How to define a partitioned external table for a nested directory structure

aelbi1ox  asked on 2021-06-27  in Hive

For data stored in HDFS in a year/*.csv structure like the following:

$ hdfs dfs -ls air/

    Found 21 items
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2000
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2001
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2002
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2003
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2004
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2005
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2006
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2007
    drwxr-xr-x   - hadoop hadoop          0 2019-03-08 01:45 air/year=2008

There are 12 CSV files per year, one per month. Because our queries do not care about month granularity, all of a year's months can go into a single directory. Here are the contents of one of the years; note the .csv files:

[hadoop@ip-172-31-25-82 ~]$ hdfs dfs -ls air/year=2008

Found 10 items
-rw-r--r--   2 hadoop hadoop  193893785 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_1.csv
-rw-r--r--   2 hadoop hadoop  199126288 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_10.csv
-rw-r--r--   2 hadoop hadoop  182225240 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_2.csv
-rw-r--r--   2 hadoop hadoop  197399305 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_3.csv
-rw-r--r--   2 hadoop hadoop  191321415 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_4.csv
-rw-r--r--   2 hadoop hadoop  194141438 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_5.csv
-rw-r--r--   2 hadoop hadoop  195477306 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_6.csv
-rw-r--r--   2 hadoop hadoop  201148079 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_7.csv
-rw-r--r--   2 hadoop hadoop  219060870 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_8.csv
-rw-r--r--   2 hadoop hadoop  172127584 2019-03-07 23:49 air/year=2008/On_Time_On_Time_Performance_2008_9.csv

The header and a sample data row look like this:

hdfs dfs -cat airlines/2008/On_Time_On_Time_Performance_2008_4.csv | head -n 2

  "Year","Quarter","Month","DayofMonth","DayOfWeek","FlightDate","UniqueCarrier","AirlineID","Carrier","TailNum","FlightNum","Origin","OriginCityName","OriginState","OriginStateFips","OriginStateName","OriginWac","Dest","DestCityName","DestState","DestStateFips","DestStateName","DestWac","CRSDepTime","DepTime","DepDelay","DepDelayMinutes","DepDel15","DepartureDelayGroups","DepTimeBlk","TaxiOut","WheelsOff","WheelsOn","TaxiIn","CRSArrTime","ArrTime","ArrDelay","ArrDelayMinutes","ArrDel15","ArrivalDelayGroups","ArrTimeBlk","Cancelled","CancellationCode","Diverted","CRSElapsedTime","ActualElapsedTime","AirTime","Flights","Distance","DistanceGroup","CarrierDelay","WeatherDelay","NASDelay","SecurityDelay","LateAircraftDelay",

2008,2,4,3,4,2008-04-03,"WN",19393,"WN","N601WN","3599","MAF","Midland/Odessa, TX","TX","48","Texas",74,"DAL","Dallas, TX","TX","48","Texas",74,"1115","1112",-3.00,0.00,0.00,-1,"1100-1159",10.00,"1122","1218",6.00,"1220","1224",4.00,4.00,0.00,0,"1200-1259",0.00,"",0.00,65.00,72.00,56.00,1.00,319.00,2,,,,,,

The question is: how do I "convince" Hive/Spark to read these correctly? The approach:
The last column, year, comes from the partitioning. The first column, YearIn, is a placeholder: its value is read in, but my application code will ignore it in favor of the year partition column.
All other fields are handled without any special consideration.
Here is my attempt:

create external table air (
    YearIn string, Quarter string, Month string,
    .. _long list of columns_ ..)
partitioned by (year int)
row format delimited fields terminated by ','
location '/user/hadoop/air/';

The result:
The table is created and can be accessed by both Hive and Spark.
But the table is empty, as reported by both Hive and Spark. What is incorrect in this process?
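
One way to see the symptom (a sketch, assuming the air table from the DDL above): until partitions are registered in the metastore, a partitioned table has nothing to scan, so every query comes back empty.

    -- No partitions have been registered yet, so no data directories are scanned:
    SHOW PARTITIONS air;        -- returns no rows
    SELECT count(*) FROM air;   -- returns 0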

lokaqttq1#

The table definition looks fine, except for the header. If you do not skip it, the header row will be returned in the dataset, and for columns that are not strings the header values will be selected as NULLs. To skip the header, add this at the end of the table DDL: tblproperties("skip.header.line.count"="1"). This property is supported in Hive only; for Spark, see this workaround: https://stackoverflow.com/a/54542483/2700344
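
A minimal sketch of the revised DDL (the column list is elided here exactly as in the question):

    create external table air (
        YearIn string, Quarter string, Month string,
        .. _long list of columns_ ..)
    partitioned by (year int)
    row format delimited fields terminated by ','
    location '/user/hadoop/air/'
    tblproperties ("skip.header.line.count"="1");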
Besides creating the table, you also need to create the partitions.
Use the MSCK [REPAIR] TABLE Air; command.
The equivalent command on the Amazon Elastic MapReduce (EMR) version of Hive is ALTER TABLE Air RECOVER PARTITIONS.
This adds the Hive partition metadata. See the manual here: Recover Partitions
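
Putting both steps together (a sketch; SHOW PARTITIONS is a standard way to confirm the result):

    -- Scan the table location and register the year=... directories as partitions:
    MSCK REPAIR TABLE air;

    -- On EMR Hive, the equivalent is:
    -- ALTER TABLE air RECOVER PARTITIONS;

    -- Confirm the partitions are now in the metastore:
    SHOW PARTITIONS air;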
