Python Web Scraping with BeautifulSoup - Proxy Error Handling

I am trying to scrape daily ETF information with Python and BeautifulSoup. My code extracts the data from the Wall Street Journal quote pages, but I am hitting a "max retries exceeded" error.
I successfully scraped 10+ ETFs in one run, but now, when I try to scrape new ETFs, I keep getting this proxy error:




ProxyError: HTTPSConnectionPool(host='quotes.wsj.com', port=443): Max
retries exceeded with url: /etf/ACWI (Caused by ProxyError('Cannot
connect to proxy.', error('Tunnel connection failed: 407 Proxy
Authorization Required',)))




I was wondering if there is a way to handle this error. My code is the following:



import requests
from bs4 import BeautifulSoup
import pandas as pd

ticker_list = ["ACWI", "AGG", "EMB", "VTI", "GOVT", "IEMB", "IEMG", "EEM", "PCY", "CWI", "SPY", "EMLC"]
x = len(ticker_list)

date, open_list, previous_list, assets_list, nav_list, shares_list = ([] for a in range(6))

for i in range(0, x):
    ticker = ticker_list[i]
    date.append("20181107")
    link = "https://quotes.wsj.com/etf/" + ticker
    proxies = {"http": "http://username:password@proxy_ip:proxy_port"}
    r = requests.get(link, proxies=proxies)
    #print(r.content)
    html = r.text
    soup = BeautifulSoup(html, "html.parser")

    aux_list, aux_list_2 = ([] for b in range(2))

    for data in soup.find_all("ul", attrs={"class": "cr_data_collection"}):
        for d in data:
            if d.name == "li":
                aux_list.append(d.text)
                print(d.text)
    print("Start List Construction!")
    k = len(aux_list)
    for j in range(0, k):
        auxiliar = []
        if "Volume" in aux_list[j]:
            auxiliar = aux_list[j].split()
            volume = auxiliar[1]
        if "Open" in aux_list[j]:
            auxiliar = aux_list[j].split()
            open_price = auxiliar[1]
            open_list.append(auxiliar[1])
        if "Prior Close" in aux_list[j]:
            auxiliar = aux_list[j].split()
            previous_price = auxiliar[2]
            previous_list.append(auxiliar[2])
        if "Net Assets" in aux_list[j]:
            auxiliar = aux_list[j].split()
            net_assets = auxiliar[2]  # in billions
            assets_list.append(auxiliar[2])
        if "NAV" in aux_list[j]:
            auxiliar = aux_list[j].split()
            nav = auxiliar[1]
            nav_list.append(auxiliar[1])
        if "Shares Outstanding" in aux_list[j]:
            auxiliar = aux_list[j].split()
            shares = auxiliar[2]  # in millions
            shares_list.append(auxiliar[2])

    print("Open Price: ", open_price, "Previous Price: ", previous_price)
    print("Net Assets: ", net_assets, "NAV: ", nav, "Shares Outstanding: ", shares)

print(nav_list, len(nav_list))
print(open_list, len(open_list))
print(previous_list, len(previous_list))
print(assets_list, len(assets_list))
print(shares_list, len(shares_list))

data = {"Fecha": date, "Ticker": ticker_list, "NAV": nav_list, "Previous Close": previous_list,
        "Open Price": open_list, "Net Assets (Bn)": assets_list, "Shares (Mill)": shares_list}
# Column names must match the dictionary keys above.
df = pd.DataFrame(data, columns=["Fecha", "Ticker", "Net Assets (Bn)", "Previous Close", "Open Price", "NAV", "Shares (Mill)"])
df

# Raw string so the backslashes in the Windows path are not treated as escapes.
df.to_excel(r"C:\Users\labnrodriguez\Documents\out_WSJ.xlsx", sheet_name="ETFs", header=True, index=False)  #, startrow=rows)


The output is the following table in an Excel file:



[screenshot of the resulting Excel table]
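For reference, a minimal sketch of one way the request could be wrapped so a ProxyError does not abort the whole run: catch requests.exceptions.ProxyError, retry a few times with a pause, and pass the proxy for both the http and https schemes (requests picks the proxy entry by URL scheme, and these quote pages are served over HTTPS). The helper name fetch, the retry count, and the pause are made up here, and the proxy URL and credentials are the same placeholders as in the code above. Note that a persistent 407 usually means the proxy is rejecting the credentials themselves, which retrying alone cannot fix.

import time
import requests

# Placeholder proxy credentials, as in the question. requests chooses the
# proxy entry by URL scheme, so an "https" entry is needed for
# https://quotes.wsj.com pages; the "http" entry alone is never used for them.
proxies = {
    "http": "http://username:password@proxy_ip:proxy_port",
    "https": "http://username:password@proxy_ip:proxy_port",
}

def fetch(link, retries=3, wait=5):
    # Retry a few times, pausing between attempts, and give up gracefully
    # instead of letting the ProxyError stop the whole run.
    for attempt in range(retries):
        try:
            r = requests.get(link, proxies=proxies, timeout=30)
            r.raise_for_status()
            return r.text
        except requests.exceptions.ProxyError as err:
            print("Proxy error on attempt", attempt + 1, "for", link, ":", err)
            time.sleep(wait)
    return None  # caller decides what to do with a failed ticker

html = fetch("https://quotes.wsj.com/etf/ACWI")
if html is None:
    print("Proxy kept refusing the connection; skipping this ticker")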


python http web-scraping beautifulsoup python-requests


asked Nov 13 '18 at 15:20
Nico Rodriguez

  • try adding a sleep between requests (rough sketch below)

    – ewwink
    Nov 14 '18 at 10:42
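Roughly what the suggested pause could look like inside the existing loop; the 5-second value is arbitrary, and x, ticker_list and proxies are the variables already defined in the question's code:

    import time

    for i in range(0, x):
        ticker = ticker_list[i]
        link = "https://quotes.wsj.com/etf/" + ticker
        r = requests.get(link, proxies=proxies)
        # ... parse the page as before ...
        time.sleep(5)  # wait a few seconds before requesting the next ticker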


  • Thanks! I will give it a try

    – Nico Rodriguez
    Nov 14 '18 at 18:08


  • didn't work! It keeps throwing the same error

    – Nico Rodriguez
    Nov 15 '18 at 20:25



1 Answer

You don't need to scrape their data in the first place. The etfdb-api Node.js package provides you with ETF data:



  • Ticker

  • Assets under Management

  • Open Price

  • Avg. Volume

  • etc.

See my post here: https://stackoverflow.com/a/53859924/9986657


answered Dec 19 '18 at 22:39
Jan