cdh3u6 single-node cluster: DataNode throws an error on startup

zvms9eto · published 2021-06-02 · in Hadoop

I am getting the following error:

STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = atuls-macbook-air.local/192.168.0.22
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-cdh3u6
STARTUP_MSG:   build = git://ubuntu-slave01/var/lib/jenkins/workspace/CDH3u6-Full-RC/build/cdh3/hadoop20/0.20.2-cdh3u6/source -r efb405d2aa54039bdf39e0733cd0bb9423a1eb0a; compiled by 'jenkins' on Wed Mar 20 11:45:36 PDT 2013

************************************************************/

2014-10-31 09:06:49,252 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.ExceptionInInitializerError
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:231)
    at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:309)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:635)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:544)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1757)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1750)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1618)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:255)
    at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:226)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1680)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1635)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1653)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1779)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1796)
Caused by: java.lang.NumberFormatException: For input string: "558:feed::1"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:492)
    at java.lang.Integer.parseInt(Integer.java:527)
    at com.sun.jndi.dns.DnsClient.<init>(DnsClient.java:125)
    at com.sun.jndi.dns.Resolver.<init>(Resolver.java:61)
    at com.sun.jndi.dns.DnsContext.getResolver(DnsContext.java:570)
    at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:430)
    at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:231)
    at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:139)
    at com.sun.jndi.toolkit.url.GenericURLDirContext.getAttributes(GenericURLDirContext.java:103)
    at sun.security.krb5.KrbServiceLocator.getKerberosService(KrbServiceLocator.java:87)
    at sun.security.krb5.Config.checkRealm(Config.java:1295)
    at sun.security.krb5.Config.getRealmFromDNS(Config.java:1268)
    at sun.security.krb5.Config.getDefaultRealm(Config.java:1162)
    at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:81)
    ... 14 more

2014-10-31 09:06:49,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at atuls-macbook-air.local/192.168.0.22

************************************************************/
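The `Caused by` line is the key part of the trace: the JDK's JNDI DNS client (`com.sun.jndi.dns.DnsClient`) ends up handing the IPv6 nameserver address `558:feed::1` to `Integer.parseInt`, which of course fails. A minimal sketch of that failure (class name is hypothetical; this only reproduces the exception, not the DNS code path itself):

```java
// Reproduces the root-cause exception from the trace above: the JDK's
// DNS client expects a numeric port where it instead receives an IPv6
// nameserver address, and Integer.parseInt rejects the string.
public class Ipv6NameserverParse {
    public static void main(String[] args) {
        String server = "558:feed::1"; // the nameserver entry from the trace
        try {
            Integer.parseInt(server);  // same kind of call that fails in DnsClient
        } catch (NumberFormatException e) {
            // Matches the message in the log:
            // For input string: "558:feed::1"
            System.out.println(e.getMessage());
        }
    }
}
```

Because the exception is thrown while `KerberosName` is being class-initialized (the JDK queries DNS for a default Kerberos realm), it surfaces as an `ExceptionInInitializerError` and kills the DataNode even though the cluster is not using Kerberos at all.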

Do I need a special user, or is something wrong with my setup? Is there a setting I am missing? I have already changed the directory permissions.
Here is my hdfs-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
     <name>dfs.permissions</name>
     <value>false</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in. 
       On a multi-node cluster, these configurations must be removed.  -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
     <name>dfs.safemode.min.datanodes</name>
     <value>1</value>
  </property>
  <property>
     <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
     <name>dfs.name.dir</name>
     <value>/Users/atul/hadoop_dfs/name</value>
  </property>
  <property>
     <name>dfs.data.dir</name>
     <value>/Users/atul/hadoop_dfs/data</value>
  </property>

  <!-- Enable Hue Plugins -->
  <property>
    <name>dfs.namenode.plugins</name>
    <value>org.apache.hadoop.thriftfs.NamenodePlugin</value>
    <description>Comma-separated list of namenode plug-ins to be activated.
    </description>
  </property>
  <property>
    <name>dfs.datanode.plugins</name>
    <value>org.apache.hadoop.thriftfs.DatanodePlugin</value>
    <description>Comma-separated list of datanode plug-ins to be activated.
    </description>
  </property>
  <property>
    <name>dfs.thrift.address</name>
    <value>0.0.0.0:10090</value>
  </property>
</configuration>

Any help on this would be much appreciated. I need to run cdh3u6, hence this particular version.
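For what it's worth, the failure in the trace (the JDK's DNS-based Kerberos realm lookup choking on an IPv6 nameserver) is a symptom widely reported on macOS. A commonly cited workaround, not verified against this exact cdh3u6 setup, is to skip the DNS realm lookup by setting empty krb5 system properties, e.g. in `hadoop-env.sh`:

```shell
# Commonly cited macOS workaround (an assumption, not verified for cdh3u6):
# empty krb5 realm/kdc properties stop the JDK from querying DNS for a
# Kerberos realm, which is the call that trips over the IPv6 nameserver.
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc= ${HADOOP_OPTS}"
```

Alternatively, removing the IPv6 entry from the machine's configured DNS servers should avoid the parse failure at the source.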
