cascading.flow.Flow.getID() method usage and code examples


This article collects code examples for the Java method cascading.flow.Flow.getID() and shows how Flow.getID() is used in practice. The examples come from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Flow.getID() follow:
Package path: cascading.flow.Flow
Class name: Flow
Method name: getID

Flow.getID description

Method getID returns the ID of this Flow object.

The ID value is a long HEX String used to identify this instance globally. Subsequent Flow instances created with identical parameters will not return the same ID.
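
Before the collected examples, here is a minimal, illustrative sketch of reading a Flow's ID after planning. It is not taken from the sources below; the HadoopFlowConnector, the scheme/tap classes, and the "input/path" / "output/path" locations are assumptions based on a Cascading 2.x Hadoop setup.

import java.util.Properties;

import cascading.flow.Flow;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.pipe.Pipe;
import cascading.scheme.hadoop.TextLine;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;

public class FlowIdExample
  {
  public static void main( String[] args )
    {
    // hypothetical input/output locations, mirroring the test examples below
    Tap source = new Hfs( new TextLine(), "input/path" );
    Tap sink = new Hfs( new TextLine(), "output/path", SinkMode.REPLACE );
    Pipe pipe = new Pipe( "example" );

    Flow flow = new HadoopFlowConnector( new Properties() ).connect( source, sink, pipe );

    // getID() returns a long HEX string identifying this Flow instance globally;
    // planning the same source/sink/pipe again yields a different ID,
    // and the ID is also available via flow.getProperty( "cascading.flow.id" )
    System.out.println( "flow id: " + flow.getID() );
    }
  }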

Code examples

Code example source: twitter/ambrose (also published as com.twitter.ambrose/ambrose-cascading)

/**
 * The onStarting event is fired when a Flow instance receives the start() message. A Flow is cut
 * down into executing units called stepFlow. A stepFlow contains a stepFlowJob which represents
 * the mapreduce job to be submitted to Hadoop. The ambrose graph is constructed from the step
 * graph found in flow object.
 *
 * @param flow the flow.
 */
@Override
@SuppressWarnings("unchecked")
public void onStarting(Flow flow) {
 // init flow
 List<FlowStep> steps = flow.getFlowSteps();
 totalNumberOfJobs = steps.size();
 currentFlowId = flow.getID();
 Properties props = new Properties();
 props.putAll(flow.getConfigAsProperties());
 try {
  statsWriteService.initWriteService(props);
 } catch (IOException e) {
  LOG.error("Failed to initialize statsWriteService", e);
 }
 // convert graph from cascading to ambrose
 AmbroseCascadingGraphConverter converter =
   new AmbroseCascadingGraphConverter(Flows.getStepGraphFrom(flow), nodesByName);
 converter.convert();
 AmbroseUtils.sendDagNodeNameMap(statsWriteService, currentFlowId, nodesByName);
}

Code example source: twitter/ambrose

currentFlowId = flow.getID();

Code example source: cwensel/cascading

@Override
public String getID()
 {
 return flow.getID();
 }

Code example source: cwensel/cascading

public void setFlow( Flow<Config> flow )
 {
 this.flow = flow;
 this.flowID = flow.getID();
 this.flowName = flow.getName();
 }

Code example source: cwensel/cascading

public static String getNameOrID( Flow flow )
 {
 if( flow == null )
  return null;
 if( flow.getName() != null )
  return flow.getName();
 return flow.getID().substring( 0, 6 );
 }

Code example source: cascading/lingual-core

@Override
public void cancel() throws SQLException
 {
 try
  {
  if( !parent.isClosed() )
   parent.cancel();
  }
 finally
  {
  Flow flow = lingualConnection.getCurrentFlow();
  if( flow != null )
   {
   LOG.info( "stopping flow: {}", flow.getID() );
   flow.stop();
   }
  }
 }

Code example source: com.twitter.ambrose/ambrose-cascading3

currentFlowId = flow.getID();

Code example source: cwensel/cascading (also in cascading/cascading-hadoop2-common)

@Test
 public void testFlowID() throws Exception
  {
  Tap source = new Lfs( new TextLine(), "input/path" );
  Tap sink = new Hfs( new TextLine(), "output/path", SinkMode.REPLACE );

  Pipe pipe = new Pipe( "test" );

  Map<Object, Object> props = getProperties();
  Flow flow1 = getPlatform().getFlowConnector( props ).connect( source, sink, pipe );

//    System.out.println( "flow.getID() = " + flow1.getID() );

  assertNotNull( "missing id", flow1.getID() );

  assertNotNull( "missing id in conf", flow1.getProperty( "cascading.flow.id" ) );

  Flow flow2 = getPlatform().getFlowConnector( props ).connect( source, sink, pipe );

  assertTrue( "same id", !flow1.getID().equalsIgnoreCase( flow2.getID() ) );
  }
