I’ve followed the guide at https://deliciousbrains.com/wp-offload-ses/doc/custom-iam-policy-for-amazon-ses/ but when I use the policy outlined there, the plugin shows no verified senders.
I can add a custom policy with full SES access to all resources, and that works. If I limit the actions to the list provided in the link above with * resources, it shows no identities. And if I grant full access (which does work) but restrict the resources to my two identity ARNs, it again shows no results.
Any ideas what might be going wrong here?
Example of the policy that actually works (but shows everything):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ses:*",
      "Resource": "*"
    }
  ]
}
Example of the policy with resources restricted that doesn’t show any verified senders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ses:*",
      "Resource": [
        "arn:aws:ses:ap-southeast-2:12345678910:identity/mydomain.com",
        "arn:aws:ses:ap-southeast-2:12345678910:identity/[email protected]"
      ]
    }
  ]
}
I also tried limiting the actions to the list in those docs while leaving the resource as *, and that doesn't work either, e.g.:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ses:VerifyEmailIdentity",
        "ses:GetSendQuota",
        "ses:SendRawEmail",
        "ses:DeleteIdentity",
        "ses:GetIdentityVerificationAttributes",
        "ses:ListIdentities",
        "ses:VerifyDomainIdentity"
      ],
      "Resource": "*"
    }
  ]
}
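For what it's worth, one untested variant would be to split the statements, since the list/read actions may simply not accept identity ARNs as resources. This is only a sketch reusing the ARNs above, and it may still be missing whatever action the plugin checks during validation:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SesReadAndVerify",
      "Effect": "Allow",
      "Action": [
        "ses:ListIdentities",
        "ses:GetSendQuota",
        "ses:GetIdentityVerificationAttributes",
        "ses:VerifyEmailIdentity",
        "ses:VerifyDomainIdentity",
        "ses:DeleteIdentity"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SesSendFromMyIdentities",
      "Effect": "Allow",
      "Action": "ses:SendRawEmail",
      "Resource": [
        "arn:aws:ses:ap-southeast-2:12345678910:identity/mydomain.com",
        "arn:aws:ses:ap-southeast-2:12345678910:identity/[email protected]"
      ]
    }
  ]
}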
I wanted to see if you have plans to allow the use of instance profiles instead of supplying ACCESS/SECRET keys. From the code perspective it should be a fairly simple change, since the AWS SDK fully supports that. Thanks.
P.S. I’ll definitely buy you a few cups of coffee if it is implemented
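For reference, this is roughly what the change amounts to on the SDK side: constructing the client without explicit credentials lets the AWS SDK for PHP fall back to its default provider chain, which includes the EC2 instance profile. A minimal sketch (the region is a placeholder):
<?php
require 'vendor/autoload.php';

use Aws\Ses\SesClient;

// No 'credentials' key: the SDK walks its default provider chain
// (environment variables, shared config files, then the EC2 instance profile).
$ses = new SesClient([
    'version' => '2010-12-01',
    'region'  => 'us-east-1', // placeholder region
]);

// Any call now signs with the instance-profile credentials.
$result = $ses->getSendQuota();
echo $result['Max24HourSend'], PHP_EOL;
?>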
I’ve been running into a very peculiar issue for a few weeks now. If I keep the ‘Remove files from server’ switch on, it results in a corrupt image and a 403 Forbidden error in my error.log.
Setup:
– Cloudflare: s.mydomainname.nl
– Amazon S3 bucket: s.mydomainname.nl, with the bucket policy as stated by Cloudflare, static website hosting disabled.
– Offload Media, set up with Cloudflare as CDN
– Smush Pro for optimisation
Steps:
– Upload a new (hi-res) image
– All the different sizes (including the original) get smushed and copied to S3
– All sizes are removed from the server perfectly, except for the original and the new "-scaled" version that WordPress has generated for me.
– That "-scaled" version is the new default size in the database, but when I copy the S3 link to that file, the image is corrupt: 243 bytes, showing a blank square in the browser (see the quick check sketched after this post).
– If I copy the link to the original image on S3, it's available as well.
– If I look at my server's error log, it shows an XML error, Access Denied by Amazon. Apparently there was an attempt to execute GetObject against 's3.amazonaws.com.s.mydomainname.nl/wp-content/uploads/2020/11/file.jpg', which also throws a 403 in the browser.
If I disable the option to remove files from the server, the whole process works flawlessly. Except that I don't want my server filling up with images…
I've been trying loads of things, including adding an extra statement to the bucket policy granting access to the IAM user used for this job, but to no avail.
Any help would be greatly appreciated!
Cheers,
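In case it helps narrow things down, here is a small diagnostic sketch that asks S3 directly for the scaled file's metadata, bypassing Cloudflare, to confirm whether the 243-byte object is really what landed in the bucket. The object key and region are placeholders:
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'eu-west-1', // placeholder region
]);

// Ask S3 directly for the object's metadata, bypassing the Cloudflare proxy.
$head = $s3->headObject([
    'Bucket' => 's.mydomainname.nl',                           // bucket from the post
    'Key'    => 'wp-content/uploads/2020/11/file-scaled.jpg',  // placeholder key
]);

echo 'Size on S3: ', $head['ContentLength'], " bytes\n";
?>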
We use IAM roles so an EC2 instance has access to the CDN S3 bucket.
The cache settings, however, require specific AWS credentials. Can I use an IAM instance profile with W3 Total Cache?
Another problem I noticed: the cache seems to require permission to list all S3 buckets.
Are there any guidelines on what permissions the AWS client should have to work with W3 Total Cache?
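In the absence of official guidelines, here is a sketch of the kind of bucket-scoped policy that usually covers a CDN bucket, plus the bucket-listing call mentioned above. The bucket name is a placeholder, and the exact action list W3 Total Cache needs is an assumption:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListAllBuckets",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "CdnBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-cdn-bucket",
        "arn:aws:s3:::example-cdn-bucket/*"
      ]
    }
  ]
}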
Error retrieving credentials from the instance profile metadata server. (Client error: GET https://169.254.169.254/latest/meta-data/iam/security-credentials/ resulted in a 404 Not Found response:
We have (i) installed the plugin with no apparent issues, (ii) created an IAM user per the Amazon S3 Quick Start Guide, and (iii) implemented the “preferably with” modification to wp-config.php to permit us to use an IAM role instead of defining access keys within the wp-config.php file, as set forth here.
The plugin appears to detect our wp-config.php modification, as it displays the alert “defined in wp-config.php” on the Media Library tab of the plugin configuration dialog.
For additional context, we have WordPress installed on an Amazon EC2 instance and, if it matters, we use Cloudflare in front of AWS, with Cloudflare acting as a proxy.
Ours is a fresh WordPress install with only two plugins: WP Offload Media Lite (v2.3.2) and WP Offload SES Lite (v1.4.1). We use Astra theme 2.4.5. The site health check reports no issues.
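For context, the wp-config.php modification referred to above looks roughly like this. It is a sketch based on the plugin's documented settings constant, and the key names may vary between versions:
// In wp-config.php, above the "That's all, stop editing!" line.
// With 'use-server-roles' enabled, WP Offload Media reads credentials
// from the EC2 instance's IAM role instead of stored access keys.
define( 'AS3CF_SETTINGS', serialize( array(
    'provider'         => 'aws',
    'use-server-roles' => true,
) ) );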
When multiple domains or email addresses are managed in SES, even those of foreign domains/customers, they are all listed. Full SES access introduces a severe security risk, though.
I tried locking down access, but without full SES permissions the plugin won't validate the setup.
How can this be adjusted to improve security?
Thanks in advance
Mike
I have a question.
When adding the IAM user, which existing IAM policy should we attach to it?
Should I add “IAMAccessAnalyzerFullAccess” to the IAM user I created?
Please advise.
WordPress version 5.1.1
BackWPup version 3.6.0
PHP version 7.3.7 (64bit)
cURL version 7.19.7
cURL SSL version NSS/3.27.1
Could it be my php.ini restrictions that prevent this?
Could it be S3 and some 5 GB file size limit I’ve seen mentioned online?
It is not either of those… because I wrote a PHP script that copies to S3 using my php.ini.
This would test both questions -> php -c /etc/php.ini /tmp/cp.php
S3 is mounted as a directory/partition on our server.
# cat /tmp/cp.php
<?php
$fn    = './BIG-6GB-FILE.tar.gz';
$newfn = '/home/MOUNTPOINT/S3/BIG-6GB-FILE.tar.gz';
if (copy($fn, $newfn)) {
    echo 'The file was copied successfully';
} else {
    echo 'An error occurred while copying the file';
}
?>
The PHP script completed successfully and copied the file to S3.
So S3 can accept a file bigger than 5 GB.
And our php.ini seems to allow it as well.
===========Here’s a snippet of our log file==================
…
[09-Jul-2019 09:55:48] Backup archive created.
[09-Jul-2019 09:55:48] Archive size is 6.84 GB.
[09-Jul-2019 09:55:48] 30870 Files with 8.66 GB in Archive.
[09-Jul-2019 09:55:52] One backup file deleted
[09-Jul-2019 09:55:52] Restart after 6 seconds.
[09-Jul-2019 09:55:55] 1. Trying to send backup file to S3 Service …
[09-Jul-2019 09:55:55] Connected to S3 Bucket “OUR-S3-BUCKET” in us-west-2
[09-Jul-2019 09:55:55] Starting upload to S3 Service …
[09-Jul-2019 10:01:13] WARNING: Job restarts due to inactivity for more than 5 minutes.
…
[09-Jul-2019 10:03:24] 3. Trying to send backup file to S3 Service …
[09-Jul-2019 10:03:24] Connected to S3 Bucket “OUR-S3-BUCKET” in us-west-2
[09-Jul-2019 10:03:24] Starting upload to S3 Service …
[09-Jul-2019 10:05:05] ERROR: S3 Service API: [curl] 55: [url] https://OUR-S3-BUCKET/OUR-BIG-FILE
[09-Jul-2019 10:05:05] Restart after 109 seconds.
…
[09-Jul-2019 10:05:22] 4. Trying to send backup file to S3 Service …
[09-Jul-2019 10:05:23] Connected to S3 Bucket “OUR-S3-BUCKET” in us-west-2
[09-Jul-2019 10:05:23] Starting upload to S3 Service …
[09-Jul-2019 10:06:54] ERROR: S3 Service API: Your proposed upload exceeds the maximum allowed size
[09-Jul-2019 10:06:54] Restart after 107 seconds.
…
[09-Jul-2019 10:09:10] 6. Trying to send backup file to S3 Service …
[09-Jul-2019 10:09:10] Connected to S3 Bucket “OUR-S3-BUCKET” in us-west-2
[09-Jul-2019 10:09:10] Starting upload to S3 Service …
[09-Jul-2019 10:10:03] ERROR: S3 Service API: [curl] 55: [url] https://OUR-S3-BUCKET/OUR-BIG-FILE
[09-Jul-2019 10:10:03] Restart after 88 seconds.
[09-Jul-2019 10:10:05] ERROR: Step aborted: too many attempts!
[09-Jul-2019 10:10:05] One old log deleted
[09-Jul-2019 10:10:05] ERROR: Job has ended with errors in 2840 seconds. You must resolve the errors for correct execution.
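For what it's worth, the "proposed upload exceeds the maximum allowed size" error in the log matches S3's 5 GB limit for a single PUT request; larger files have to go up as a multipart upload. Below is a minimal standalone sketch using the AWS SDK for PHP (the file path and object key are placeholders, and this says nothing about how BackWPup uploads internally):
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-west-2', // region from the log above
]);

// Splits the archive into parts, so the 5 GB single-PUT limit no longer applies.
$uploader = new MultipartUploader($s3, '/backups/BIG-6GB-FILE.tar.gz', [
    'bucket' => 'OUR-S3-BUCKET',       // placeholder, as in the log
    'key'    => 'BIG-6GB-FILE.tar.gz', // placeholder key
]);

try {
    $result = $uploader->upload();
    echo 'Uploaded to ', $result['ObjectURL'], PHP_EOL;
} catch (MultipartUploadException $e) {
    echo 'Upload failed: ', $e->getMessage(), PHP_EOL;
}
?>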
I set up a custom policy giving the user full permissions on the specific bucket I created. When I save the settings in the plugin, it tells me it doesn't have enough permissions.
Is this because the plugin expects full access to S3 as a whole (all buckets), or is it possible to give it full access to just a single bucket?
Thanks for your time.
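In case it is useful, here is a sketch of the single-bucket variant being asked about: full access to one bucket, plus the ability to list buckets, since some plugins only validate once they can enumerate buckets. The bucket name is a placeholder, and whether this satisfies the plugin's permission check is an assumption:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBuckets",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SingleBucketFullAccess",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}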