I ssh to my EC2 instance, where I can run these commands and they work perfectly:

    aws sqs list-queues
    aws s3 ls
I have a small Python script that pulls data from a database, formats it as XML, and uploads the file to S3. The upload fails with this error:
    Traceback (most recent call last):
      File "./data_test/data_analytics/lexisnexis/async2.py", line 289, in <module>
        insert_parallel(engine, qy, Create_Temp.profile_id, nworkers)
      File "./data_test/data_analytics/lexisnexis/async2.py", line 241, in insert_parallel
        s3upload(bucketname, keyname, f)
      File "./data_test/data_analytics/lexisnexis/async2.py", line 89, in s3upload
        bucket = conn.get_bucket(bucketname)
      File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 506, in get_bucket
        return self.head_bucket(bucket_name, headers=headers)
      File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 525, in head_bucket
        response = self.make_request('HEAD', bucket_name, headers=headers)
      File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 668, in make_request
        retry_handler=retry_handler
      File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 1071, in make_request
        retry_handler=retry_handler)
      File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 1030, in _mexe
        raise ex
    SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
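For context, the failing path in s3upload boils down to something like this (a minimal sketch reconstructed from the traceback, not the full script; bucketname, keyname, and f come from elsewhere in it):

    import boto

    def s3upload(bucketname, keyname, f):
        # Connects with the same credential chain the CLI uses
        # (environment, config file, or instance role).
        conn = boto.connect_s3()
        # The HEAD request behind get_bucket() is where the SSLError is raised.
        bucket = conn.get_bucket(bucketname)
        key = bucket.new_key(keyname)
        key.set_contents_from_file(f)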
How can the script die on certificate verification when the aws cli works?
To be clear, I'm running the Python script as the same user, on the same EC2 instance, that I use to run the aws cli commands.
    aws --version
    aws-cli/1.11.176 Python/2.7.12 Linux/4.9.43-17.38.amzn1.x86_64 botocore/1.7.34
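In case it's useful, this is how I can test whether Python's SSL stack can verify the S3 endpoint at all, independent of boto (a diagnostic sketch; s3.amazonaws.com is an assumption, substitute the regional endpoint if yours differs):

    import socket
    import ssl

    # Open a TLS connection to S3 with default CA verification, outside of boto.
    # If this also raises CERTIFICATE_VERIFY_FAILED, the problem is the CA
    # bundle Python sees, not boto itself.
    ctx = ssl.create_default_context()
    s = ctx.wrap_socket(socket.socket(), server_hostname="s3.amazonaws.com")
    s.connect(("s3.amazonaws.com", 443))
    print(s.getpeercert()["subject"])
    s.close()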