I have a bash script that downloads a lot of files from a URL and saves them to my computer, like:
curl 'http://website.com/shop.php?store_id=[66-900]&page=[7-444]&PHPSESSID=phpsessid' -O
(and a bunch of headers set, the typical request you get when you copy a request as curl in firefox)
I tried to rewrite the script in Python, but it turns out to be very hard (I'm a complete newbie):
import requests

cookies = {
    'PHPSESSID': '<phpsessid here>',
}
headers = {
    '<bunch of header stuff here>'
}
params = (
    ('store_id', '[66-900]'),
    ('page', '[7-444]'),
    ('PHPSESSID', '<phpsessid here>'),
)
response = requests.get('http://website.com/shop.php', headers=headers, params=params, cookies=cookies)
But it just doesn't do anything (while the curl command downloads everything as commanded). I realised I may need to write the downloaded content to a file, but I'm lost, since I'm not handling one file but hundreds of files, as the curl command above shows.
I'm completely lost as to what I'm missing or what I need to do to replicate a curl request like that in Python. ANY help appreciated!
requests may fetch something, but it will not save it to a file automatically. You should check what you have in response.content and write code which writes it to a file. requests also does not expand curl-style ranges, so use a for-loop with a single value in store_id instead of [66-900], and do the same with page.
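A minimal sketch of that advice, assuming the same placeholder URL, session id, and parameter ranges as in your curl command (the filename pattern and function names here are my own invention, since curl's -O derives names from the URL):

```python
import requests


def make_filename(store_id: int, page: int) -> str:
    # Hypothetical naming scheme; pick whatever layout suits you.
    return f'shop_store{store_id}_page{page}.html'


def download_all(session_id: str) -> None:
    cookies = {'PHPSESSID': session_id}
    headers = {}  # paste the headers you copied from Firefox here

    # requests does not expand [66-900] the way curl does,
    # so loop over every store_id/page combination explicitly.
    for store_id in range(66, 901):
        for page in range(7, 445):
            params = {
                'store_id': store_id,
                'page': page,
                'PHPSESSID': session_id,
            }
            response = requests.get('http://website.com/shop.php',
                                    headers=headers, params=params,
                                    cookies=cookies)
            # Write the raw body to its own file, mirroring curl's -O.
            with open(make_filename(store_id, page), 'wb') as f:
                f.write(response.content)


if __name__ == '__main__':
    download_all('<phpsessid here>')
```

Note that this fires (900-66+1) * (444-7+1) requests in a row; you may want to add error handling and a small time.sleep() between requests so you don't hammer the server.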