Uploading and downloading large files can be tedious, especially when you're unable to view the progress and status of the request. Using the requests library alongside the clint library's progress bar makes this much nicer: the data is streamed in chunks, and progress is reported as each chunk is transferred.
In the snippets below, you'll see a function 'progress.bar' being called. This is contained inside the clint.textui module, imported below; both requests and clint are available on PyPI.
Required imports:
import requests
import os
from clint.textui import progress
GET Requests (Downloading)
Steps:
- Send a GET request for the file we'd like to download, with streaming enabled.
- Open a file object at the path we're downloading to, allowing us to continuously append bytes to the end of the file.
- Determine the number of chunks we'll be downloading, based on the "Content-Length" response header.
- Use iter_content to convert the response into a chunk iterator.
- Enumerate the chunks, writing each one into the output file.
 
def download_file(url, path):
    # STEP 1: send the request; note that stream must be True so the body is fetched in chunks.
    response = requests.get(url, stream=True)
    if response.status_code != 200:
        return False
    # STEP 2: open a file object at the local path, so we can write each chunk of data.
    with open(path, 'wb') as file:
        # STEP 3: determine the number of chunks we'll be downloading, using the "Content-Length" header
        total_length = int(response.headers.get("Content-Length", 0))
        count = (total_length // 1024) + 1
        # STEP 4: use iter_content to get an iterator of chunks, and use 'clint' to log progress
        chunks = response.iter_content(chunk_size=1024)
        chunker = progress.bar(chunks, expected_size=count, label="downloading: ")
        # STEP 5: loop through the chunks, writing each one to the file
        for chunk in chunker:
            if chunk:
                file.write(chunk)
                file.flush()
    return True
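To try the downloader, call it with a source URL and a destination path; it returns True on success and False if the request failed. The URL and filename below are placeholders.

# Example usage -- the URL and output path are placeholders.
if download_file("https://example.com/large-file.zip", "large-file.zip"):
    print("download complete")
else:
    print("download failed")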
PUT Requests (Uploading)
Steps:
- Open a file object at the path we'll be uploading data from, allowing us to read chunked content from the file.
- Determine the number of chunks we'll be uploading by getting the size of the file on disk.
- Read the file as an iterator of chunks.
- Send the PUT request to upload the chunked data.
 
def get_chunks(file, chunk_size=1024):
    # Read the file one chunk at a time, yielding each chunk as it's read.
    while True:
        data = file.read(chunk_size)
        if not data:
            break
        yield data

def upload_file(local_path, url):
    # STEP 1: open a file object to read content from the local file path
    with open(local_path, 'rb') as file:
        # STEP 2: determine the total number of chunks we're uploading from the file size
        total_length = os.path.getsize(local_path)
        count = (total_length // 1024) + 1
        # STEP 3: read the file as an iterator of chunks, using the get_chunks generator
        chunks = get_chunks(file, 1024)
        chunker = progress.bar(chunks, expected_size=count, label="uploading: ")
        # STEP 4: send the PUT request, streaming the chunked iterator as the request body
        return requests.put(url, data=chunker)
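Because progress.bar yields each chunk from the generator as it advances the bar, passing it as data lets requests stream the upload with chunked transfer encoding instead of loading the whole file into memory. A minimal usage sketch, with a placeholder file and endpoint:

# Example usage -- the local file and upload URL are placeholders.
response = upload_file("large-file.zip", "https://example.com/upload")
print("upload status:", response.status_code)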
			