Suggestions on using the Nutch content limit

mum43rcc  posted on 2021-05-29 in Hadoop
Follow (0) | Answers (1) | Views (340)

I am using Nutch 2.1 to crawl an entire domain (e.g. company.com). I ran into the problem that, because of the content limit set in Apache Nutch, I was not getting all of the links I wanted to crawl. Typically, when I inspected the stored content, only the upper half of a page had been saved to the database, so the links in the lower half were never fetched.
To work around this, I modified nutch-site.xml so that the content limit looks like this:

<property>
    <name>http.content.limit</name>
    <value>-1</value>
    <description>The length limit for downloaded content using the http
    protocol, in bytes. If this value is nonnegative (>=0), content longer
    than it will be truncated; otherwise, no truncation at all. Do not
    confuse this setting with the file.content.limit setting.
    </description>
</property>

Doing this solved the problem, but at some point I started hitting an OutOfMemory error, as shown by the output of the parse step:

ParserJob: starting
ParserJob: resuming:    false
ParserJob: forced reparse:  false
ParserJob: parsing all
Exception in thread "main" java.lang.RuntimeException: job failed: name=parse, jobid=job_local_0001
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54)
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:251)
at org.apache.nutch.parse.ParserJob.parse(ParserJob.java:259)
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:302)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.parse.ParserJob.main(ParserJob.java:306)

Here is my hadoop.log (the part near the error):

2016-01-22 02:02:35,898 INFO  crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature
2016-01-22 02:02:37,255 WARN  util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-01-22 02:02:39,130 INFO  mapreduce.GoraRecordReader - gora.buffer.read.limit = 10000
2016-01-22 02:02:39,255 INFO  mapreduce.GoraRecordWriter - gora.buffer.write.limit = 10000
2016-01-22 02:02:39,322 INFO  crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature
2016-01-22 02:02:53,018 WARN  mapred.FileOutputCommitter - Output path is null in cleanup
2016-01-22 02:02:53,031 WARN  mapred.LocalJobRunner - job_local_0001
java.lang.OutOfMemoryError: Java heap space
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3051)
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2991)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3532)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:943)
    at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1441)
    at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2936)
    at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:477)
    at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2631)
    at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1800)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2221)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624)
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127)
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2293)
    at org.apache.gora.sql.store.SqlStore.execute(SqlStore.java:423)
    at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:71)
    at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:66)
    at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:102)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.map

I only run into this problem when I set the content limit to -1. However, if I don't, there is a chance that I won't get all of the links I want to crawl. Are there any suggestions on how to use the content limit? Is setting it to -1 really unwise? If so, what alternatives could I use? Thanks!

wr98u20j1#

The problem is that you have set the content limit to unlimited (-1). When your crawler hits heavy URLs such as https://en.wikipedia.org, https://wikipedia.org and https://en.wikibooks.org, the system can run out of memory during the crawl. You should increase the memory available to Nutch by setting the NUTCH_HEAPSIZE environment variable, e.g. export NUTCH_HEAPSIZE=4000 (see the nutch script for details), as sketched below. Note that this value is the equivalent of Hadoop's HADOOP_HEAPSIZE. If it still does not work, you should increase the physical memory of your system.
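
A minimal sketch of that workaround, assuming a local Nutch 2.x install driven from the bin/nutch script and that the failing step was the parse of all batches shown in the log; the 4000 MB heap value is only an illustrative figure and should be sized to your machine:

    # Give the Nutch JVM a larger heap before re-running the failing step.
    # The bin/nutch script reads NUTCH_HEAPSIZE (in MB) and passes it to the JVM;
    # on a Hadoop cluster the analogous setting is HADOOP_HEAPSIZE.
    export NUTCH_HEAPSIZE=4000

    # Re-run the parse that failed; "-all" matches the "parsing all" line in the log.
    bin/nutch parse -all

If the job still exhausts the heap even with a larger value, one pragmatic middle ground (not covered in this answer) is a finite http.content.limit that is just large enough for your biggest pages, rather than -1.
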
Hope this helps,
Le Quoc Do
