Usage of the org.sweble.wikitext.engine.WtEngineImpl.postprocess() method, with code examples


This article collects Java code examples for the org.sweble.wikitext.engine.WtEngineImpl.postprocess() method and shows how it is used in practice. The examples were extracted from selected projects on GitHub, Stack Overflow and Maven, and should serve as a useful reference. Details of WtEngineImpl.postprocess() are as follows:
Package: org.sweble.wikitext.engine
Class: WtEngineImpl
Method: postprocess

About WtEngineImpl.postprocess

Takes wikitext and parses it for viewing. The following steps are performed (a minimal end-to-end sketch follows the list):

  • Validation
  • Preprocessing (for viewing)
  • Entity substitution
  • Optional: Expansion
  • Parsing
  • Entity substitution
  • Postprocessing
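
Before the collected examples, here is a minimal end-to-end sketch of a typical postprocess() call, assembled from the examples further down. The helper name wikitextToPlainText is only for illustration; TextConverter is the plain-text visitor used in the Sweble example code below, and passing null as the ExpansionCallback argument skips template expansion, as all of the examples below do.

// Minimal usage sketch (assembled from the examples below; TextConverter is
// the plain-text visitor from the Sweble example module).
public String wikitextToPlainText(String title, String wikitext)
    throws LinkTargetException, EngineException
{
  // Set up a simple English-Wikipedia configuration
  WikiConfig config = DefaultConfigEnWp.generate();
  // Instantiate a compiler for wiki pages
  WtEngineImpl engine = new WtEngineImpl(config);
  // Identify the page being parsed (-1 = no particular revision)
  PageTitle pageTitle = PageTitle.make(config, title);
  PageId pageId = new PageId(pageTitle, -1);
  // Validate, preprocess, parse and postprocess; null = no expansion callback
  EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
  // Walk the resulting AST, here rendering it as plain text wrapped at 80 columns
  TextConverter converter = new TextConverter(config, 80);
  return (String) converter.go(cp.getPage());
}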

Code examples

Code example from: marcusklang/wikiforia

public static EngProcessedPage parseWikipage(WtEngineImpl engine, PageId pageId, String markup) throws EngineException {
  return engine.postprocess(pageId, markup, null);
}

Code example from: stackoverflow.com

public String convertWikiText(String title, String wikiText, int maxLineLength) throws LinkTargetException, EngineException {
  // Set-up a simple wiki configuration
  WikiConfig config = DefaultConfigEnWp.generate();
  // Instantiate a compiler for wiki pages
  WtEngineImpl engine = new WtEngineImpl(config);
  // Retrieve a page
  PageTitle pageTitle = PageTitle.make(config, title);
  PageId pageId = new PageId(pageTitle, -1);
  // Compile the retrieved page
  EngProcessedPage cp = engine.postprocess(pageId, wikiText, null);
  TextConverter p = new TextConverter(config, maxLineLength);
  return (String)p.go(cp.getPage());
}

Code example from: org.sweble.wikitext/swc-engine

public EngProcessedPage wmToAst(
    PageId pageId,
    String wikitext,
    ExpansionCallback callback) throws EngineException
{
  return engine.postprocess(pageId, wikitext, callback);
}

Code example from: sweble/sweble-wikitext

static String run(File file, String fileTitle, String query) throws LinkTargetException, IOException, EngineException
{
  // Set-up a simple wiki configuration
  WikiConfig config = DefaultConfigEnWp.generate();
  // Instantiate a compiler for wiki pages
  WtEngineImpl engine = new WtEngineImpl(config);
  // Retrieve a page
  PageTitle pageTitle = PageTitle.make(config, fileTitle);
  PageId pageId = new PageId(pageTitle, -1);
  String wikitext = FileUtils.readFileToString(file, Charset.defaultCharset().name());
  // Compile the retrieved page
  EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
  return XPath.query(cp, query);
}

Code example from: kermitt2/entity-fishing

/**
 * @return the content of the wiki text fragment with all markup removed
 */
public String toTextOnly(String wikitext, String lang) {
  String result = "";
  // get a compiler for wiki pages
  //WtEngineImpl engine = new WtEngineImpl(config);        
  WtEngineImpl engine = engines.get(lang);
  try {
    // Retrieve a page 
    // PL: no clue what is this page title thing ?? not even documented
    PageTitle pageTitle = PageTitle.make(configs.get(lang), "crap");
    PageId pageId = new PageId(pageTitle, -1);
    // Compile the retrieved page
    EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
    WikiTextConverter converter = new WikiTextConverter(configs.get(lang));
    result = (String)converter.go(cp.getPage());
  } catch(Exception e) {
    LOGGER.warn("Fail to parse MediaWiki text, lang is " + lang, e);
  }
  return trim(result);
}

Code example from: kermitt2/entity-fishing

/**
 * @return the content of the wiki text fragment with all markup removed except links 
 * to internal wikipedia pages: external links to the internet are removed
 */
public String toTextWithInternalLinksOnly(String wikitext, String lang) {
  String result = "";
  // Instantiate a compiler for wiki pages
  //WtEngineImpl engine = new WtEngineImpl(config);        
  WtEngineImpl engine = engines.get(lang);
  try {
    // Retrieve a page 
    // PL: no clue what is this??
    PageTitle pageTitle = PageTitle.make(configs.get(lang), "crap");
    PageId pageId = new PageId(pageTitle, -1);
    // Compile the retrieved page
    EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
    WikiTextConverter converter = new WikiTextConverter(configs.get(lang));
    converter.addToKeep(WikiTextConverter.INTERNAL_LINKS);
    result = (String)converter.go(cp.getPage());
  } catch(Exception e) {
    LOGGER.warn("Fail to parse MediaWiki text, lang is " + lang, e);
  }
  return trim(result);
}

Code example from: kermitt2/entity-fishing

/**
 * @return the content of the wiki text fragment with all markup removed except links 
 * to internal wikipedia articles : external links to the internet are removed, as well as
 * internal link not to an article (e.g. redirection, disambiguation page, category, ...)
 */
public String toTextWithInternalLinksArticlesOnly(String wikitext, String lang) {
  String result = "";
  // Instantiate a compiler for wiki pages
  //WtEngineImpl engine = new WtEngineImpl(config);        
  WtEngineImpl engine = engines.get(lang);
  try {
    // Retrieve a page 
    // PL: no clue what is this??
    PageTitle pageTitle = PageTitle.make(configs.get(lang), "crap");
    PageId pageId = new PageId(pageTitle, -1);
    // Compile the retrieved page
    EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
    WikiTextConverter converter = new WikiTextConverter(configs.get(lang));
    converter.addToKeep(WikiTextConverter.INTERNAL_LINKS_ARTICLES);
    result = (String)converter.go(cp.getPage());
  } catch(Exception e) {
    LOGGER.warn("Fail to parse MediaWiki text, lang is " + lang, e);
  }
  return trim(result);
}

Code example from: kermitt2/entity-fishing

/**
 * @return the content of the wiki text fragment with all markup removed except links 
 * to internal wikipedia pages and category links: external links to the internet are removed
 */
public String toTextWithInternalLinksAndCategoriesOnly(String wikitext, String lang) {
  String result = "";
  // Instantiate a compiler for wiki pages
  //WtEngineImpl engine = new WtEngineImpl(config);        
  WtEngineImpl engine = engines.get(lang);
  try {
    // Retrieve a page 
    // PL: no clue what is this??
    PageTitle pageTitle = PageTitle.make(configs.get(lang), "crap");
    PageId pageId = new PageId(pageTitle, -1);
    // Compile the retrieved page
    EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
    WikiTextConverter converter = new WikiTextConverter(configs.get(lang));
    converter.addToKeep(WikiTextConverter.INTERNAL_LINKS);
    converter.addToKeep(WikiTextConverter.CATEGORY_LINKS);
    result = (String)converter.go(cp.getPage());
  } catch(Exception e) {
    LOGGER.warn("Fail to parse MediaWiki text, lang is " + lang, e);
  }
  return trim(result);
}

Code example from: kermitt2/entity-fishing

/**
 * @return the content of the wiki text fragment with all markup removed except links 
 * to internal wikipedia (external links to the internet are removed) and except emphasis 
 * (bold and italics)
 */
public String toTextWithInternalLinksEmphasisOnly(String wikitext, String lang) {
  String result = "";
  // Instantiate a compiler for wiki pages
  //WtEngineImpl engine = new WtEngineImpl(config);
  WtEngineImpl engine = engines.get(lang);    
  try {
    // Retrieve a page 
    // PL: no clue what is this??
    PageTitle pageTitle = PageTitle.make(configs.get(lang), "crap");
    PageId pageId = new PageId(pageTitle, -1);
    // Compile the retrieved page
    EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
    WikiTextConverter converter = new WikiTextConverter(configs.get(lang));
    converter.addToKeep(WikiTextConverter.INTERNAL_LINKS);
    converter.addToKeep(WikiTextConverter.BOLD);
    converter.addToKeep(WikiTextConverter.ITALICS);
    result = (String)converter.go(cp.getPage());
  } catch(Exception e) {
    LOGGER.warn("Fail to parse MediaWiki text, lang is " + lang, e);
  }
  return trim(result);
}

Code example from: dkpro/dkpro-jwpl

/**
 * Returns the EngProcessedPage produced by the Sweble parser using the
 * default English Wikipedia configuration.
 *
 * @return the parsed page
 * @throws LinkTargetException
 * @throws EngineException if the wiki page could not be compiled by the parser
 * @throws JAXBException
 * @throws FileNotFoundException
 */
private static EngProcessedPage getCompiledPage(String text, String title, long revision) throws LinkTargetException, EngineException, FileNotFoundException, JAXBException
{
  WikiConfig config = DefaultConfigEnWp.generate();
  PageTitle pageTitle = PageTitle.make(config, title);
  PageId pageId = new PageId(pageTitle, revision);
  // Instantiate a compiler for wiki pages
  WtEngineImpl engine = new WtEngineImpl(config);
  // Compile the retrieved page
  return engine.postprocess(pageId, text, null);
}

Code example from: sweble/sweble-wikitext

static String run(File file, String fileTitle, boolean renderHtml) throws IOException, LinkTargetException, EngineException
{
  // Set-up a simple wiki configuration
  WikiConfig config = DefaultConfigEnWp.generate();
  final int wrapCol = 80;
  // Instantiate a compiler for wiki pages
  WtEngineImpl engine = new WtEngineImpl(config);
  // Retrieve a page
  PageTitle pageTitle = PageTitle.make(config, fileTitle);
  PageId pageId = new PageId(pageTitle, -1);
  String wikitext = FileUtils.readFileToString(file, Charset.defaultCharset().name());
  // Compile the retrieved page
  EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);
  if (renderHtml)
  {
    String ourHtml = HtmlRenderer.print(new MyRendererCallback(), config, pageTitle, cp.getPage());
    String template = IOUtils.toString(App.class.getResourceAsStream("/render-template.html"), "UTF8");
    String html = template;
    html = html.replace("{$TITLE}", StringTools.escHtml(pageTitle.getDenormalizedFullTitle()));
    html = html.replace("{$CONTENT}", ourHtml);
    return html;
  }
  else
  {
    TextConverter p = new TextConverter(config, wrapCol);
    return (String) p.go(cp.getPage());
  }
}

Code example from: sweble/sweble-wikitext

// Parse and postprocess the wikitext of the given page; the trailing null skips template expansion
EngProcessedPage cp = engine.postprocess(pageId, wikitext, null);

Code example from: dkpro/dkpro-jwpl

/**
 * Returns the EngProcessedPage produced by the Sweble parser using the wiki's configuration.
 *
 * @return the parsed page
 * @throws WikiApiException Thrown if errors occurred.
 */
private EngProcessedPage getCompiledPage() throws WikiApiException
{
  EngProcessedPage cp;
  try{
    WtEngineImpl engine = new WtEngineImpl(this.wiki.getWikConfig());
    PageTitle pageTitle = PageTitle.make(this.wiki.getWikConfig(), this.getTitle().toString());
    PageId pageId = new PageId(pageTitle, -1);
    // Compile the retrieved page
    cp = engine.postprocess(pageId, this.getText(), null);
  } catch(Exception e){
    throw new WikiApiException(e);
  }
  return cp;
}

Code example from: org.sweble.wikitext/swc-engine

// Apply the postprocessing step to an already parsed AST (pAst)
pAst = postprocess(title, pAst, log);
