Sometimes you need to create an account and log in to access the information you want. If you have a good HTTP library that handles logins and automatically sends session cookies (did I mention how awesome Requests is?), then you just need to log in before your scraper gets to work.
In many scenarios, the data you want to scrape only becomes available after login. To reach the page where the data lives, your scraper needs to submit a username (or email) and password to the site automatically; once it is logged in, it can crawl and parse as required. We often have to write spiders that log in to sites in order to scrape data from them: our customers provide the site, username, and password, and we do the rest.
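The workflow above can be sketched with Requests. A `Session` object POSTs the credentials and then automatically sends back any session cookies the server set, so later requests are authenticated. The URLs and form field names below are placeholders; in practice you must inspect the site's login `<form>` to find the real action URL and input names.

```python
import requests

# Hypothetical URLs -- replace with the real site's login form action
# and the page you actually want to scrape.
LOGIN_URL = "https://example.com/login"
DATA_URL = "https://example.com/members/data"


def build_payload(username, password):
    # The keys must match the name="" attributes of the login form's
    # inputs; "username" and "password" are assumptions.
    return {"username": username, "password": password}


def scrape(username, password):
    # A Session stores cookies set during login and sends them
    # automatically on every subsequent request.
    with requests.Session() as session:
        resp = session.post(LOGIN_URL, data=build_payload(username, password))
        resp.raise_for_status()  # fail loudly if the login request errored

        # Now fetch the page that is only visible after login.
        page = session.get(DATA_URL)
        page.raise_for_status()
        return page.text


if __name__ == "__main__":
    html = scrape("alice", "s3cret")
    print(html[:200])
```

Some sites also require a hidden CSRF token from the login page in the POST payload; in that case, GET the login page first, parse the token out of the form, and add it to the payload before posting.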