python python-3.x python-requests selenium web-scraping

Fill forms using selenium or requests

I’m trying to log in to this site to retrieve my bank account. First I tried Selenium, but I could only fill in the username (maybe because the page has 2 forms):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get(url)  # the login page
user = driver.find_element_by_name("usr")
user.send_keys("my_username")  # this works
pas = driver.find_element_by_name("claveConsultiva")  # this is where it fails
pas.send_keys("my_password")
driver.find_element_by_id("login_button").click()

Then I went rambo mode 🙂, trying to figure out why I can’t fill the password field, and what the hidden values of the form are, using requests. This is the code:

url = ",,276_1_2,00.html"     
user_agent = {"user-agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/..."}
session = requests.session()
r = session.get(url)
soup = BeautifulSoup(r.text, "html.parser")
data = {t['name']:t.get('value') for t in soup.find_all('input', attrs={'type': 'hidden'})}

But I just received an empty dict. What is the best approach to log in to a site and scrape it?
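For reference, the dict comprehension itself is fine; it returns an empty dict only when the fetched HTML contains no hidden inputs (for example, when the login form actually lives in a separately loaded frame). A minimal sketch with hypothetical markup, showing the expected behaviour when the hidden fields are inline:

```python
from bs4 import BeautifulSoup

# Hypothetical login form whose hidden inputs are inline in the fetched HTML.
html = """
<form action="/login" method="post">
  <input type="hidden" name="csrf_token" value="abc123">
  <input type="hidden" name="flow_id" value="42">
  <input type="text" name="usr">
  <input type="password" name="claveConsultiva">
</form>
"""

soup = BeautifulSoup(html, "html.parser")
# Collect name -> value for every hidden input found in the document.
data = {t["name"]: t.get("value") for t in soup.find_all("input", attrs={"type": "hidden"})}
print(data)  # {'csrf_token': 'abc123', 'flow_id': '42'}
```

If the same comprehension over the real page returns `{}`, the hidden inputs simply aren’t in the HTML that was downloaded.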

Once you access the URL, you first have to click on the element with the text Login; only then do the Nome and Password fields appear. To access those fields you have to switch to the frame with id ws, inducing WebDriverWait. Then, to locate the Nome element, you have to induce WebDriverWait again, as follows:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.ID, "ws")))
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='inputlong' and @id='identificacionUsuario']"))).send_keys("your_name")
driver.find_element_by_xpath("//input[@id='claveConsultiva' and @name='claveConsultiva']").send_keys("your_password")
driver.find_element_by_link_text("Entrar no NetBanco Particulares").click()

Here you can find a relevant discussion on Ways to deal with #document under iframe
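If you’d rather stay with requests, note that the #document under an iframe is a separate page with its own URL: you can pull the iframe’s src attribute from the outer page and fetch that URL with the same session, then look for the hidden inputs there. A sketch, assuming a hypothetical outer page and base URL:

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup

# Hypothetical outer page: the login form lives in a separate frame document.
outer_html = '<html><body><iframe id="ws" src="/netbanco/login_frame.html"></iframe></body></html>'

soup = BeautifulSoup(outer_html, "html.parser")
frame = soup.find("iframe", id="ws")
# Resolve the frame's relative src against the page URL it was served from.
frame_url = urljoin("https://example.com/home.html", frame["src"])
print(frame_url)  # https://example.com/netbanco/login_frame.html
```

A `session.get(frame_url, headers=user_agent)` on the resolved URL would then return the HTML that actually contains the login form and its hidden fields.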