Usage of the org.apache.lucene.analysis.Token.getPositionIncrement() method, with code examples

Reposted by x33g5p2x on 2022-01-30

This article collects code examples of the Java method org.apache.lucene.analysis.Token.getPositionIncrement() and shows how it is used in practice. The examples were extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the method:

Package: org.apache.lucene.analysis.Token
Class: Token
Method: getPositionIncrement

About Token.getPositionIncrement

Returns the position increment of this token. The default is 1. A value of 0 places the token at the same position as the preceding token (for example, a synonym), while a value greater than 1 indicates a gap, such as stop words removed from the stream.
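Before looking at the examples, the core idea can be shown without Lucene at all. The following plain-Java sketch (the `PositionDemo` class and its `computePositions` helper are hypothetical names, not Lucene API) derives absolute token positions from a stream of position increments, which is exactly the accumulation pattern the examples below use:

```java
import java.util.ArrayList;
import java.util.List;

public class PositionDemo {
    // Accumulate absolute positions from per-token increments:
    // an increment of 0 stacks a token on the previous position
    // (e.g. a synonym), a value > 1 marks a gap (e.g. removed stop words).
    static List<Integer> computePositions(int[] increments) {
        List<Integer> positions = new ArrayList<>();
        int position = 0;
        for (int increment : increments) {
            position += increment;
            positions.add(position);
        }
        return positions;
    }

    public static void main(String[] args) {
        // "quick" (1), its synonym "fast" (0), "fox" (1), then a gap before "jumps" (2)
        System.out.println(computePositions(new int[] {1, 0, 1, 2}));
        // prints [1, 1, 2, 4]
    }
}
```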

Code examples

Example source: org.apache.lucene/lucene-analyzers

@Override
public TokenPositioner getTokenPositioner(Token token) throws IOException {
 if (token.getPositionIncrement() == 0) {
  return TokenPositioner.newRow;
 } else {
  return TokenPositioner.newColumn;
 }
}

Example source: org.compass-project/compass

public int getPositionIncrement() {
  return token.getPositionIncrement();
}

Example source: hibernate/hibernate-search

public static void displayTokensWithFullDetails(Analyzer analyzer, String field, String text) throws IOException {
  Token[] tokens = tokensFromAnalysis( analyzer, field, text );
  StringBuilder builder = new StringBuilder();
  int position = 0;
  for ( Token token : tokens ) {
    int increment = token.getPositionIncrement();
    if ( increment > 0 ) {
      position = position + increment;
      builder.append( "\n" ).append( position ).append( ": " );
    }
    builder.append( "[" )
        .append( getTermText( token ) )
        .append( ":" )
        .append( token.startOffset() )
        .append( "->" )
        .append( token.endOffset() )
        .append( ":" )
        .append( token.type() )
        .append( "] " );
    log.debug( builder.toString() );
  }
}

Example source: hibernate/hibernate-search

/**
 * Utility to print out the tokens generated by a specific Analyzer on an example text.
 * You have to specify the field name as well, as some Analyzer(s) might have a different
 * configuration for each field.
 * This implementation is not suited for top performance and is not used by Hibernate Search
 * during automatic indexing: this method is only made available to help understanding
 * and debugging the analyzer chain.
 * @param analyzer the Analyzer to use
 * @param field the name of the field: might affect the Analyzer behaviour
 * @param text some sample input
 * @param printTo Human readable text will be printed to this output. Passing {@code System.out} might be a good idea.
 * @throws IOException if an I/O error occurs
 */
public static void displayTokensWithPositions(Analyzer analyzer, String field, String text, PrintStream printTo) throws IOException {
  Token[] tokens = tokensFromAnalysis( analyzer, field, text );
  int position = 0;
  for ( Token token : tokens ) {
    int increment = token.getPositionIncrement();
    if ( increment > 0 ) {
      position = position + increment;
      printTo.println();
      printTo.print( position + ": " );
    }
    log.debug( "[" + getTermText( token ) + "] " );
  }
}

Example source: org.apache.lucene/lucene-core-jfrog

/**
 * Returns the next input Token whose term() is not a stop word.
 */
public final Token next(final Token reusableToken) throws IOException {
 assert reusableToken != null;
 // return the first non-stop word found
 int skippedPositions = 0;
 for (Token nextToken = input.next(reusableToken); nextToken != null; nextToken = input.next(reusableToken)) {
  if (!stopWords.contains(nextToken.termBuffer(), 0, nextToken.termLength())) {
   if (enablePositionIncrements) {
    nextToken.setPositionIncrement(nextToken.getPositionIncrement() + skippedPositions);
   }
   return nextToken;
  }
  skippedPositions += nextToken.getPositionIncrement();
 }
 // reached EOS -- return null
 return null;
}
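The stop-filter pattern above can be mimicked without Lucene. A minimal sketch, assuming all input tokens start with an increment of 1 (the `StopSkipDemo` class and `survivorIncrements` helper are hypothetical names), folds the positions of removed stop words into the increment of the next surviving token, as the filter does when `enablePositionIncrements` is on:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class StopSkipDemo {
    // For each surviving term, emit its own increment (1 here) plus the
    // number of stop-word positions skipped since the last emitted term.
    static List<Integer> survivorIncrements(List<String> terms, Set<String> stopWords) {
        List<Integer> result = new ArrayList<>();
        int skipped = 0;
        for (String term : terms) {
            if (stopWords.contains(term)) {
                skipped += 1;          // remember the hole left by the stop word
            } else {
                result.add(1 + skipped);
                skipped = 0;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(survivorIncrements(
                List.of("the", "quick", "the", "fox"), Set.of("the")));
        // prints [2, 2]: "quick" and "fox" each jump over a removed "the"
    }
}
```

Preserving these gaps matters because phrase queries rely on the distance between positions; silently closing them would make "quick fox" match as if the stop words had never been there.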

Example source: org.dspace.dependencies.solr/dspace-solr-core

Token tok1 = iter1.hasNext() ? iter1.next() : null;
Token tok2 = iter2.hasNext() ? iter2.next() : null;
int pos1 = tok1!=null ? tok1.getPositionIncrement() : 0;
int pos2 = tok2!=null ? tok2.getPositionIncrement() : 0;
while(tok1!=null || tok2!=null) {
 while (tok1 != null && (pos1 <= pos2 || tok2==null)) {
  pos=pos1;
  tok1 = iter1.hasNext() ? iter1.next() : null;
  pos1 += tok1!=null ? tok1.getPositionIncrement() : 0;
 }
 // symmetric branch for the second iterator; the guard is reconstructed,
 // as the original excerpt was truncated here
 while (tok2 != null && (pos2 <= pos1 || tok1==null)) {
  pos=pos2;
  tok2 = iter2.hasNext() ? iter2.next() : null;
  pos2 += tok2!=null ? tok2.getPositionIncrement() : 0;
 }
 // ... (excerpt truncated in the original source)
}

Example source: org.dspace.dependencies.solr/dspace-solr-core

if (startOffset == -1) {
  startOffset = token.startOffset();
  firstPositionIncrement = token.getPositionIncrement();

Example source: ajermakovics/eclipse-instasearch

private void applyToken(Token token)
{
  termAtt.setTermBuffer(token.termBuffer(), 0, token.termLength());
  posAtt.setPositionIncrement(token.getPositionIncrement());
  offsetAtt.setOffset(token.startOffset(), token.endOffset());
}

Example source: org.compass-project/compass

nextToken.startOffset() + curOffset,
    nextToken.endOffset() + curOffset);
offsetToken.setPositionIncrement(nextToken.getPositionIncrement() + extra * 10);
return offsetToken;

Example source: treygrainger/solr-in-action

@Override
public boolean incrementToken() throws IOException {
 if (this.tokens == null) {
  String data = convertReaderToString(this.multiTextInput.Reader);
  if (data.equals("")) {
   return false;
  }
  // get tokens
  this.tokens = mergeToSingleTokenStream(createPositionsToTokensMap(
    this.namedAnalyzers, data));
  if (this.tokens == null) {
   // at end of stream for some reason
   return false;
  }
 }
 if (tokens.isEmpty()) {
  this.tokens = null;
  return false;
 } else {
  clearAttributes();
  Token token = tokens.removeFirst();
  this.charTermAttribute.copyBuffer(token.buffer(), 0, token.length());
  this.offsetAttribute.setOffset(token.startOffset(), token.endOffset()
    + this.startingOffset);
  this.typeAttribute.setType(token.type());
  this.positionAttribute.setPositionIncrement(token.getPositionIncrement());
  return true;
 }
}

Example source: lucene/lucene

try {
 for (Token t = stream.next(); t != null; t = stream.next()) {
  position += (t.getPositionIncrement() - 1);
  addPosition(fieldName, t.termText(), position++);
  if (++length > maxFieldLength) break;

Example source: org.dspace.dependencies.solr/dspace-solr-core

tokenNamedList.add("end", token.endOffset());
position += token.getPositionIncrement();
tokenNamedList.add("position", position);

Example source: org.dspace.dependencies.solr/dspace-solr-core

static NamedList<NamedList<Object>> getTokens(TokenStream tstream) throws IOException {
 // outer is namedList since order of tokens is important
 NamedList<NamedList<Object>> tokens = new NamedList<NamedList<Object>>();
 Token t = null;
 while ((t = tstream.next()) != null) {
  NamedList<Object> token = new SimpleOrderedMap<Object>();
  tokens.add("token", token);
  token.add("value", new String(t.termBuffer(), 0, t.termLength()));
  token.add("start", t.startOffset());
  token.add("end", t.endOffset());
  token.add("posInc", t.getPositionIncrement());
  token.add("type", t.type());
  //TODO: handle payloads
 }
 return tokens;
}

Example source: org.infinispan/infinispan-embedded-query

private void setCurrentToken(Token token) {
 if (token == null) return;
 clearAttributes();
 termAtt.copyBuffer(token.buffer(), 0, token.length());
 posIncrAtt.setPositionIncrement(token.getPositionIncrement());
 flagsAtt.setFlags(token.getFlags());
 offsetAtt.setOffset(token.startOffset(), token.endOffset());
 typeAtt.setType(token.type());
 payloadAtt.setPayload(token.getPayload());
}

Example source: org.apache.lucene/lucene-analyzers

@Override
public final boolean incrementToken() throws IOException {
 if (matrix == null) {
  matrix = new Matrix();
  // fill matrix with maximumShingleSize columns
  while (matrix.columns.size() < maximumShingleSize && readColumn()) {
   // this loop looks ugly
  }
 }
 // this loop exists in order to avoid recursive calls to the next method
 // as the complexity of a large matrix
 // then would require a multi gigabyte sized stack.
 Token token;
 do {
  token = produceNextToken(reusableToken);
 } while (token == request_next_token);
 if (token == null) return false;
 clearAttributes();
 termAtt.copyBuffer(token.buffer(), 0, token.length());
 posIncrAtt.setPositionIncrement(token.getPositionIncrement());
 flagsAtt.setFlags(token.getFlags());
 offsetAtt.setOffset(token.startOffset(), token.endOffset());
 typeAtt.setType(token.type());
 payloadAtt.setPayload(token.getPayload());
 return true;
}

Example source: org.dspace.dependencies.solr/dspace-solr-core

protected Token process(Token t) throws IOException {
  Token tok = read();
  while (tok != null && tok.getPositionIncrement()==0) {
   if (null != t) {
    write(t);
    t = null;
   }
   boolean dup=false;
   for (Token outTok : output()) {
    int tokLen = tok.termLength();
    if (outTok.termLength() == tokLen && ArraysUtils.equals(outTok.termBuffer(), 0, tok.termBuffer(), 0, tokLen)) {
     dup=true;
    }
   }
   if (!dup){
    write(tok);
   }
   tok = read();
  }
  if (tok != null) {
   pushBack(tok);
  }
  return t;
 }
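The duplicate-dropping logic in the `process` method above can be sketched in plain Java (the `DedupDemo` class and `dedupeSamePosition` helper are hypothetical names): tokens with an increment of 0 share the position of the previous token, and within each position only the first occurrence of a term is kept:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupDemo {
    record Tok(String term, int increment) {}

    // Keep only the first occurrence of each term within a position:
    // an increment > 0 starts a new position group, while an increment
    // of 0 joins the current one, mirroring the filter above.
    static List<String> dedupeSamePosition(List<Tok> tokens) {
        List<String> out = new ArrayList<>();
        Set<String> seenAtPosition = new HashSet<>();
        for (Tok tok : tokens) {
            if (tok.increment() > 0) {
                seenAtPosition.clear();   // new position: forget earlier terms
            }
            if (seenAtPosition.add(tok.term())) {
                out.add(tok.term());      // first term of its kind at this position
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "quick" and a second "fast" stacked on the same position as "fast"
        System.out.println(dedupeSamePosition(List.of(
                new Tok("fast", 1), new Tok("quick", 0),
                new Tok("fast", 0), new Tok("jumps", 1))));
        // prints [fast, quick, jumps]
    }
}
```

Such stacks of same-position tokens typically come from synonym injection, where the same synonym can be produced twice and would otherwise be indexed redundantly.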
