Basic steps of crawling web data with Python:

from urllib import request
response = request.urlopen('Full URL')  # replace 'Full URL' with the target address
html = response.read().decode('utf-8')

import requests
import chardet
url = 'Full URL'  # replace with the target address
response = requests.get(url)
# detect the encoding from the raw bytes so response.text decodes correctly
response.encoding = chardet.detect(response.content)['encoding']
html = response.text
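chardet is a third-party package (requests itself exposes `response.apparent_encoding` for the same job). If neither is available, a crude stdlib-only fallback is to try a few common encodings in order. This helper is my own sketch, not part of the original post:

```python
def decode_html(raw: bytes) -> str:
    """Crude stand-in for chardet: try common encodings in order."""
    # utf-8 first, then gb18030 (common on Chinese sites);
    # latin-1 never raises, so keep it as the last resort
    for enc in ('utf-8', 'gb18030'):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    return raw.decode('latin-1')
```

Trying encodings in a fixed order is far less accurate than real statistical detection, but it never raises and covers the common cases.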

selenium (use this for dynamically loaded web pages)
from selenium import webdriver

Scrapy framework

----- extract content ------
Generally, inspect the page in the browser developer console: first find the repeated (uniform) structure, then locate its parent element.
1. regular expressions
2. BeautifulSoup
3. selenium's own locator methods
4. XPath
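As a minimal sketch of option 1, a regular expression can pull every link text out of a repeated structure once you have found it in the console. The HTML below is a made-up example:

```python
import re

# A made-up fragment with a uniform, repeated structure
html = '''
<ul class="items">
  <li class="item"><a href="/p/1">First product</a></li>
  <li class="item"><a href="/p/2">Second product</a></li>
</ul>
'''

# One capture group per repeated element; findall returns the captures
titles = re.findall(r'<a href="[^"]*">([^<]+)</a>', html)
print(titles)  # → ['First product', 'Second product']
```

Regex is fine for small, stable structures; for anything nested or irregular, BeautifulSoup or XPath is more robust.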

----- store content -------
1. TXT
2. CSV
3. Excel
4. MongoDB
5. MySQL
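A minimal sketch of option 2, storing scraped rows with the standard-library csv module. The rows and the file name 'results.csv' are made-up examples:

```python
import csv

# Rows as produced by the extraction step (hypothetical data)
rows = [
    {'title': 'First product', 'url': 'https://example.com/p/1'},
    {'title': 'Second product', 'url': 'https://example.com/p/2'},
]

# newline='' is required by the csv module to avoid blank lines on Windows
with open('results.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'url'])
    writer.writeheader()
    writer.writerows(rows)
```

The same rows can be read back with `csv.DictReader`; for Excel, MongoDB, or MySQL you would swap in openpyxl, pymongo, or a MySQL driver respectively.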

©2020 ioDraw All rights reserved