Day 25: Scraping PTT's Gossiping Board

We can finally leave the beginner village!
After all that solid training, we now have the skills to scrape the sites we want.
In today's video we crawl the Gossiping board of the well-known forum PTT, analyzing the page in detail and extending the code step by step.
Without further ado, let's go!

Below is the code used in the video, in three stages: crawl a single index page, then also locate the previous-page link, and finally crawl a user-chosen number of pages.

Stage 1: crawl a single index page.

import requests, bs4

url = "https://www.ptt.cc/bbs/Gossiping/index.html"
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36'}
cookies = {'over18': '1'}  # PTT's age-confirmation cookie; without it you only get the over-18 prompt page
htmlfile = requests.get(url, headers=headers, cookies=cookies)
objsoup = bs4.BeautifulSoup(htmlfile.text, 'lxml')

articles = objsoup.find_all('div', class_='r-ent')  # each article row on the index page

number = 0

for article in articles:

    title = article.find('a')
    author = article.find('div', class_='author')
    date = article.find('div', class_='date')

    if title is None:  # skip deleted posts ("本文已被删除"): the row remains but the link is gone
        continue
    else:
        number += 1
        print("Article no.:", number)
        print("Title:", title.text)
        print("Author:", author.text)
        print("Posted:", date.text)
        print("=" * 100)
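The `title is None` guard matters because a deleted post keeps its `r-ent` row but loses the `<a>` tag inside it. A minimal offline sketch of that situation (the HTML snippet is hypothetical, mirroring PTT's structure; `html.parser` is used here instead of lxml so it runs without extra dependencies):

```python
import bs4

# Hypothetical snippet mirroring PTT's index-page markup
html = """
<div class="r-ent">
  <div class="title"><a href="/bbs/Gossiping/M.123.A.html">A normal post</a></div>
  <div class="author">alice</div>
  <div class="date"> 9/21</div>
</div>
<div class="r-ent">
  <div class="title">(本文已被删除)</div>
  <div class="author">-</div>
  <div class="date"> 9/21</div>
</div>
"""

soup = bs4.BeautifulSoup(html, 'html.parser')
for block in soup.find_all('div', class_='r-ent'):
    link = block.find('a')
    if link is None:
        print("skipped (deleted post)")   # second block: no <a> tag left
    else:
        print("title:", link.text)        # first block: "A normal post"
```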
Stage 2: crawl one index page and also find the link to the previous (older) page.

import requests, bs4

url_1 = "https://www.ptt.cc"
url_2 = "/bbs/Gossiping/index.html"

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36'}
cookies = {'over18': '1'}  # PTT's age-confirmation cookie
htmlfile = requests.get(url_1 + url_2, headers=headers, cookies=cookies)
objsoup = bs4.BeautifulSoup(htmlfile.text, 'lxml')

articles = objsoup.find_all('div', class_='r-ent')

number = 0

for article in articles:

    title = article.find('a')
    author = article.find('div', class_='author')
    date = article.find('div', class_='date')

    if title is None:  # skip deleted posts ("本文已被删除")
        continue
    else:
        number += 1
        print("Article no.:", number)
        print("Title:", title.text)
        print("Author:", author.text)
        print("Posted:", date.text)
        print("=" * 100)

before = objsoup.find_all('a', class_='btn wide')  # paging buttons: oldest / prev / next / newest
url_2 = before[1].get('href')  # the second button links to the previous page
print("Previous page URL:", url_1 + url_2)
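Why `before[1]`? PTT's paging bar lists its wide buttons in a fixed order (oldest, previous page, next page, newest), so index 1 is the previous-page link. A sketch on a hypothetical snippet of that bar (the hrefs are made up):

```python
import bs4
from urllib.parse import urljoin

# Hypothetical snippet mirroring PTT's paging bar
html = """
<div class="btn-group btn-group-paging">
  <a class="btn wide" href="/bbs/Gossiping/index1.html">最旧</a>
  <a class="btn wide" href="/bbs/Gossiping/index39990.html">&lsaquo; 上页</a>
  <a class="btn wide" href="/bbs/Gossiping/index39992.html">下页 &rsaquo;</a>
  <a class="btn wide" href="/bbs/Gossiping/index.html">最新</a>
</div>
"""

soup = bs4.BeautifulSoup(html, 'html.parser')
# class_="btn wide" matches the full class string of these links exactly
buttons = soup.find_all('a', class_='btn wide')
prev_url = urljoin("https://www.ptt.cc", buttons[1].get('href'))
print(prev_url)  # https://www.ptt.cc/bbs/Gossiping/index39990.html
```

`urljoin` is a slightly more robust alternative to the `url_1 + url_2` string concatenation used above, since it also handles relative hrefs correctly.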
Stage 3: crawl a user-specified number of pages by following the previous-page link in a loop.

import requests, bs4

page = int(input("How many pages to crawl: "))
url_1 = "https://www.ptt.cc"
url_2 = "/bbs/Gossiping/index.html"

counter = 0
number = 0

# headers and cookies don't change between requests, so build them once outside the loop
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36'}
cookies = {'over18': '1'}

while counter < page:

    htmlfile = requests.get(url_1 + url_2, headers=headers, cookies=cookies)
    objsoup = bs4.BeautifulSoup(htmlfile.text, 'lxml')

    articles = objsoup.find_all('div', class_='r-ent')

    for article in articles:

        title = article.find('a')
        author = article.find('div', class_='author')
        date = article.find('div', class_='date')

        if title is None:  # skip deleted posts ("本文已被删除")
            continue
        else:
            number += 1
            print("Article no.:", number)
            print("Title:", title.text)
            print("Author:", author.text)
            print("Posted:", date.text)
            print("=" * 100)

    before = objsoup.find_all('a', class_='btn wide')
    url_2 = before[1].get('href')  # follow the previous-page link on the next iteration
    counter += 1

This video and its code are provided for study purposes only; please don't crawl aggressively and put heavy load on the target site!
If anything in the video is unclear or wrong, feel free to leave a comment and let me know. Thank you!
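One simple way to honor that warning is to pause between page fetches. A hypothetical helper (not from the video) that could be called at the bottom of the while loop, after each page is processed:

```python
import time
import random

def polite_pause(base=1.0, jitter=0.5):
    """Sleep for `base` seconds plus a random jitter; returns the delay used."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay

# Demo with a short delay so it finishes quickly; in a real crawl,
# a base of one second or more per page is a reasonable starting point.
used = polite_pause(base=0.1, jitter=0.1)
print(f"slept for {used:.2f}s")
```

The random jitter avoids a perfectly regular request rhythm, which is both gentler on the server and less likely to trip simple rate limits.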

