Preventing DoS attacks caused by multithreaded code

xbp102n0 · posted 2021-05-27 in Hadoop

Let me summarize my problem:
There are roughly 4,000 servers, each hosting millions of URLs. My code needs to hit every URL and write the response code, together with the URL, to the HDFS file system.
One more requirement added here: tracking the number of requests sent to the web pages.
I'm using a producer-consumer model with 400 threads. This code recently caused a DoS attack against some of the web servers, and I'm having a hard time figuring out what's wrong:
Main class:

public void readURLS(final Path inputPath, final Path outputPath) {
    LOG.info("Looking for files to download, queue size: {}, DOWNLOAD_THREADS: {}", queueSize, producerThreads);
    final List<Path> files = HdfsUtils.listDirectory(inputPath, hadoopConf);
    final BlockingQueue<String> queue = new LinkedBlockingQueue<>(queueSize);
    final UrlConsumerWriter consumerWriter =
            new UrlConsumerWriter(queue, outputPath, hadoopConf);

    LOG.info("Starting download of {} files from: '{}'", files.size(), inputPath);
    final ExecutorService writerPool = DownloadUtils.createWriterPool();
    CompletableFuture<Void> producer = downloadFilesToQueue(files, queue)
            .thenRun(consumerWriter::notifyProducersDone);
    CompletableFuture<Void> consumer =
            // Cancel the download workers if the write worker fails
            CompletableFuture.runAsync(consumerWriter, writerPool)
                    .whenComplete((result, err) -> {
                        if (err != null) {
                            LOG.error("Consumer Write worker failed!", err);
                            producer.cancel(true);
                        }
                    });

    writerPool.shutdown();
    producer.join();
    consumer.join();
    LOG.info("Url Validation Job Complete!!!");
}

private CompletableFuture<Void> downloadFilesToQueue(
        final List<Path> files,
        final BlockingQueue<String> downloadQueue
) {
    final ExecutorService pool = DownloadUtils.createDownloadPool(producerThreads);

    final List<CompletableFuture<Void>> workers = files.stream()
            .map(file -> new UrlDownloadWorker(clock, file, hadoopConf, downloadQueue,
                    utils, (validatorImpl.emptyTable())))
            .map(worker -> CompletableFuture.runAsync(worker, pool))
            .collect(Collectors.toList());

    pool.shutdown();

    final CompletableFuture<Void> allDownloads = CompletableFuture.allOf(workers.toArray(new CompletableFuture[0]));

    // When one worker fails, cancel all the others immediately
    for (final CompletableFuture<Void> worker : workers) {
        worker.whenComplete((v, err) -> {
            if (err != null) {
                LOG.error("Download worker failed!", err);
                allDownloads.cancel(true);
            }
        });
    }

    return allDownloads;
}

Producer class:

@Override
    public void run() {
        LOG.info("Starting download worker for file: '{}'", file);
        long numLines = 0;

        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                file.getFileSystem(hadoopConf).open(file), CHARSET))) {
            String line;
            while ((line = reader.readLine()) != null) {
               // LOG.info("Thread {} Reading file: '{}'",Thread.currentThread().getName(), file);

                if (Thread.interrupted()) {
                    throw new InterruptedException();
                }
                StringBuilder builder = new StringBuilder();

                // validate the URL (this issues an HTTP request to the target server)
                final StatusCode statusCode = utils.validateURL(line);

                if (statusCode != null) {
                        queue.put(builder.append(line)
                                .append(",")
                                .append(statusCode.name()).toString());

                    builder.setLength(0);
                } else {
                    throw new UrlValidationException(
                            "Failed to validate url :'" + line + "'");
                }
                numLines++;
            }

        } catch (IOException e) {
            throw new DownloadException(file, e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new DownloadException("Interrupted while downloading", file, e);
        }
        LOG.info("Download of {} lines complete for file: '{}'", numLines, file);
    }

UrlValidationUtils class:

public final class UrlValidationUtils {
    private static final String WEBSITENOTCHECK = "uncheck.org";
    private final Map<String, StatusCode> blockedHosts = new ConcurrentHashMap<>();
    private static final int MAX_REDIRECT = 4;

    public StatusCode validateURL(String url) throws IOException {
        return validate(url, MAX_REDIRECT);
    }

    private StatusCode validate(String url, int maxRedirect) throws IOException {
        URL urlValue = new URL(url);
        HttpURLConnection con;

        if (url.contains(WEBSITENOTCHECK)) {
            blockedHosts.put(urlValue.getHost(), StatusCode.SUCCESS);
        }
        //first check if the host is already marked as invalid
//        if (blockedHosts.containsKey(urlValue.getHost())) {
//            return blockedHosts.get(urlValue.getHost());
//        }
        StatusCode statusCode;
        con = (HttpURLConnection) urlValue.openConnection();

        try {
            int resCode;
            con.setInstanceFollowRedirects(false);
            con.setConnectTimeout(3000); //set timeout to 3 seconds
            con.connect();
            resCode = con.getResponseCode();

            LOG.info("thread name {} connection id {} url {} ", Thread.currentThread().getName(), con.toString(), url);
            if (resCode == HttpURLConnection.HTTP_OK) {
                statusCode = StatusCode.SUCCESS;
            } else if (resCode == HttpURLConnection.HTTP_SEE_OTHER || resCode == HttpURLConnection.HTTP_MOVED_PERM
                    || resCode == HttpURLConnection.HTTP_MOVED_TEMP) {
                String location = con.getHeaderField("Location");
                if (location.startsWith("/")) {
                    location = urlValue.getProtocol() + "://" + urlValue.getHost() + location;
                }
                statusCode = validateRedirect(location, maxRedirect - 1, con);

            } else {
                blockedHosts.put(urlValue.getHost(), StatusCode.FAIL);
                statusCode = StatusCode.FAIL;
            }
        } catch (UnknownHostException e) {
            blockedHosts.put(urlValue.getHost(), StatusCode.UNKOWNHOST);
            statusCode = StatusCode.UNKOWNHOST;
        } catch (ConnectException e) {
            blockedHosts.put(urlValue.getHost(), StatusCode.CONNECTION_ISSUE);
            statusCode = StatusCode.CONNECTION_ISSUE;
        } catch (IOException e) {
            // if an IOException is caught, the most likely cause is a socket timeout
            blockedHosts.put(urlValue.getHost(), StatusCode.SOCKETTIMEOUT);
            statusCode = StatusCode.SOCKETTIMEOUT;
        }
        con.disconnect();
        LOG.info("thread name {} connection id {} url {} ", Thread.currentThread().getName(), con.toString(), url);

        return statusCode;
    }

    private StatusCode validateRedirect(String location, int redirectCount, HttpURLConnection connection)
            throws IOException {
        if (redirectCount >= 0) {
            connection.disconnect();
            return validate(location, redirectCount);
        }
        return StatusCode.FAIL;

    }

}

Answer 1 (by 3yhwsihp):

To avoid overloading the servers, I would suggest waiting a few milliseconds before hitting each batch of URLs. For example, after hitting n URLs you could wait 20 ms, then hit the next n, and so on. The batch size (n) depends on how many requests per second you know the servers can handle. On the performance side, do you have a service-level agreement with them?
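As an illustration, here is a minimal sketch of that idea in Java. Everything in it (the RequestThrottler name, batchSize, pauseMillis) is hypothetical and not part of the original code; it simply blocks for a fixed delay once a batch of requests has been sent:

// Hypothetical helper: pause for a fixed delay after every `batchSize` requests.
// The class name and parameter values are illustrative; tune them per target server.
public final class RequestThrottler {
    private final int batchSize;      // n URLs per batch
    private final long pauseMillis;   // e.g. 20 ms between batches
    private int sentInBatch = 0;

    public RequestThrottler(int batchSize, long pauseMillis) {
        this.batchSize = batchSize;
        this.pauseMillis = pauseMillis;
    }

    // Call once before each request; sleeps once the current batch is exhausted.
    // Because the sleep happens while holding the lock, all calling threads pause together.
    public synchronized void acquire() throws InterruptedException {
        if (sentInBatch >= batchSize) {
            Thread.sleep(pauseMillis);
            sentInBatch = 0;
        }
        sentInBatch++;
    }
}

In the producer's run() loop you would call acquire() on a shared instance just before utils.validateURL(line). Keeping one instance per host would throttle each server independently, while a single global instance caps the overall request rate across all 400 threads.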
