Anaconda must be installed in advance; only after it is installed can the Anaconda interpreter environment be added in PyCharm.
Step two: install Scrapy. Click the green plus button in the upper-right corner of the window shown in Figure 1; the dialog shown in Figure 2 appears. Type "scrapy" into the search box, then click the "Install Package" button and wait until the message "Package 'scrapy' installed successfully" appears.
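If you prefer the command line, Scrapy can also be installed into the active Anaconda environment directly (an alternative to the PyCharm dialog, not a step from the original workflow):

conda install -c conda-forge scrapy   # install from the conda-forge channel
# or, equivalently:
pip install scrapy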
Next, create the Scrapy project. Open a terminal in the working directory and run:

scrapy startproject stockstar

Here scrapy startproject is the fixed command, and stockstar is the project name chosen by the author.
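For reference, startproject generates a project skeleton roughly like the following (details may vary slightly across Scrapy versions):

stockstar/                  # project root
    scrapy.cfg              # deployment configuration
    stockstar/              # the project's Python module
        __init__.py
        items.py            # item definitions
        pipelines.py        # item pipelines
        settings.py         # project settings
        spiders/            # spiders live here
            __init__.py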
Among the generated files, settings.py contains the option ROBOTSTXT_OBEY = True by default. robots.txt is a file that follows the robots exclusion protocol: when Scrapy starts, it first fetches the site's robots.txt and then decides the crawl scope for that site. Sometimes we need to set this option to False. In settings.py, change it as follows:

ROBOTSTXT_OBEY = False
Right-click the E:\stockstar\stockstar folder and, in the context menu, choose "Mark Directory as" → "Sources Root"; this makes the package-import syntax more concise, as shown in Figure 4. Then define the fields to be scraped in items.py:

import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst


class StockstarItemLoader(ItemLoader):
    # Custom ItemLoader that stores the field contents scraped by the spider
    default_output_processor = TakeFirst()


class StockstarItem(scrapy.Item):
    # Define the fields for your item here, e.g.:
    # name = scrapy.Field()
    code = scrapy.Field()            # stock code
    abbr = scrapy.Field()            # stock abbreviation
    last_trade = scrapy.Field()      # latest price
    chg_ratio = scrapy.Field()       # change percentage
    chg_amt = scrapy.Field()         # change amount
    chg_ratio_5min = scrapy.Field()  # 5-minute change percentage
    volumn = scrapy.Field()          # trading volume
    turn_over = scrapy.Field()       # turnover
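As a quick standalone illustration of what TakeFirst does (a hypothetical snippet, not part of the project files): extraction normally yields a list of matches, while TakeFirst keeps only the first one.

from scrapy.selector import Selector
from stockstar.items import StockstarItem, StockstarItemLoader

# a tiny HTML fragment standing in for one table row (hypothetical data)
row = Selector(text='<table><tr><td><a>600000</a></td><td><a>PFB</a></td></tr></table>')
loader = StockstarItemLoader(item=StockstarItem(), selector=row)
loader.add_css('code', 'td:nth-child(1) a::text')
item = loader.load_item()
print(item['code'])  # '600000' — a single string rather than a list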
To make the exported JSON readable, define a custom exporter in settings.py:

from scrapy.exporters import JsonLinesItemExporter
# By default, Chinese characters are exported as hard-to-read Unicode escapes.
# Define a subclass that keeps the original character set
# (simply set the parent class's ensure_ascii attribute to False).
class CustomJsonLinesItemExporter(JsonLinesItemExporter):
    def __init__(self, file, **kwargs):
        super(CustomJsonLinesItemExporter, self).__init__(file, ensure_ascii=False, **kwargs)

# enable the newly defined exporter class
FEED_EXPORTERS = {
    'json': 'stockstar.settings.CustomJsonLinesItemExporter',
}
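The effect of ensure_ascii can be seen with the standard json module alone (a minimal sketch, independent of Scrapy; the stock name is illustrative):

import json

item = {'abbr': '浦发银行'}
print(json.dumps(item))                      # {"abbr": "\u6d66\u53d1\u94f6\u884c"} — Unicode escapes
print(json.dumps(item, ensure_ascii=False))  # {"abbr": "浦发银行"} — readable characters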
...
#Configure a delay for requests for the same website (default: 0)
#See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
#See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.25
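As the comment above notes, AutoThrottle is an alternative to a fixed delay; a minimal sketch of enabling it in settings.py (the values shown are illustrative):

# settings.py — adaptive throttling instead of a fixed DOWNLOAD_DELAY
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 0.25  # initial download delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 10      # upper bound when the server responds slowly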
Next, generate the spider. In the terminal, switch into the project directory and create a spider named stock for the domain quote.stockstar.com:

cd stockstar
scrapy genspider stock quote.stockstar.com
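This creates spiders/stock.py containing a skeleton roughly like the following (the exact template varies by Scrapy version):

import scrapy

class StockSpider(scrapy.Spider):
    name = 'stock'
    allowed_domains = ['quote.stockstar.com']
    start_urls = ['http://quote.stockstar.com/']

    def parse(self, response):
        pass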
Flesh the skeleton out into the full crawling logic:

import scrapy
from items import StockstarItem, StockstarItemLoader


class StockSpider(scrapy.Spider):
    name = 'stock'  # spider name
    allowed_domains = ['quote.stockstar.com']  # domain the spider is restricted to
    start_urls = ['http://quote.stockstar.com/stock/ranklist_a_3_1_1.html']  # start URL

    def parse(self, response):  # the scraping logic
        page = int(response.url.split("_")[-1].split(".")[0])  # extract the current page number
        item_nodes = response.css('#datalist tr')
        for item_node in item_nodes:
            # populate the fields defined in the items file
            item_loader = StockstarItemLoader(item=StockstarItem(), selector=item_node)
            item_loader.add_css("code", "td:nth-child(1) a::text")
            item_loader.add_css("abbr", "td:nth-child(2) a::text")
            item_loader.add_css("last_trade", "td:nth-child(3) span::text")
            item_loader.add_css("chg_ratio", "td:nth-child(4) span::text")
            item_loader.add_css("chg_amt", "td:nth-child(5) span::text")
            item_loader.add_css("chg_ratio_5min", "td:nth-child(6) span::text")
            item_loader.add_css("volumn", "td:nth-child(7)::text")
            item_loader.add_css("turn_over", "td:nth-child(8)::text")
            stock_item = item_loader.load_item()
            yield stock_item
        if item_nodes:  # keep paging as long as the current page had rows
            next_page = page + 1
            next_url = response.url.replace("{0}.html".format(page), "{0}.html".format(next_page))
            yield scrapy.Request(url=next_url, callback=self.parse)
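The paging logic simply rewrites the trailing _<page>.html segment of the URL; for example (hypothetical values):

url = 'http://quote.stockstar.com/stock/ranklist_a_3_1_1.html'
page = int(url.split('_')[-1].split('.')[0])                                  # -> 1
next_url = url.replace('{0}.html'.format(page), '{0}.html'.format(page + 1))
# next_url == 'http://quote.stockstar.com/stock/ranklist_a_3_1_2.html'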
To run the spider from within PyCharm, create a main.py with the following content:

from scrapy.cmdline import execute

# equivalent to running "scrapy crawl stock -o items.json" on the command line
execute(["scrapy", "crawl", "stock", "-o", "items.json"])
Alternatively, run the crawl from the command line in the project root:

E:\stockstar>scrapy crawl stock -o items.json
You can set breakpoints in the code (for example, inside spiders/stock.py), then click the "Run" menu and choose the "Debug 'main'" command to step through the spider, as shown in Figures 7 and 8.