Merging and overwriting data sets in Pig

uidvcgyl · asked 2021-05-29 · Hadoop

I have three data sets, all in this format: (acctid:chararray, rule:chararray, value:chararray)
Set 1 file:

123;R1;r1 version set 1 123
123;R2;r2 version set 1 123
123;R3;r3 version set 1 123
124;R1;r1 version set 1 124
124;R2;r2 version set 1 124
124;R3;r3 version set 1 124

Set 2 file (changes the R2 values):

123;R2;r2 version set 2 123
124;R2;r2 version set 2 124

Set 3 file:

123;R4;r4 version set 3 123
124;R4;r4 version set 3 124

I need to merge the data so that:
the R2 values in the first data set are replaced with the values from the second data set,
the R3 values are removed,
and the R4 values are added.
Then I can group by account id and get this final result:

123;R1;r1 version set 1 123
123;R2;r2 version set 2 123
123;R4;r4 version set 3 123
124;R1;r1 version set 1 124
124;R2;r2 version set 2 124
124;R4;r4 version set 3 124

I have tried various joins and merges, but I can't work out whether this is possible. Thanks.


iaqfqrcu · answer #1

Try this; it will give you the output you want:

set_1 = LOAD '/home/abhis/set_1' USING PigStorage(';') AS (acctid:chararray, rule: chararray, value: chararray);
set_2 = LOAD '/home/abhis/set_2' USING PigStorage(';') AS (acctid:chararray, rule: chararray, value: chararray);
set_3 = LOAD '/home/abhis/set_3' USING PigStorage(';') AS (acctid:chararray, rule: chararray, value: chararray);

DATA_SET1 = FILTER set_1 BY (rule matches '.*R1.*');

DATA_SET2 = UNION DATA_SET1,set_2,set_3;
DATA_SET3 = ORDER DATA_SET2 by acctid,rule;
dump DATA_SET3;

Output:

(123,R1,r1 version set 1 123)
(123,R2,r2 version set 2 123)
(123,R4,r4 version set 3 123)
(124,R1,r1 version set 1 124)
(124,R2,r2 version set 2 124)
(124,R4,r4 version set 3 124)
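One caveat: the `FILTER ... matches '.*R1.*'` step keeps only the R1 rows from set_1, which happens to work here because R1 is the only rule in set_1 that is neither overridden (R2) nor removed (R3). If set_1 ever contains additional rules that should survive, an exclusion filter is more robust. A sketch, reusing the relation names from the script above:

```
-- Drop exactly the rules that set_2 overrides (R2) and the rule to be
-- removed (R3); every other rule in set_1 passes through unchanged.
DATA_SET1 = FILTER set_1 BY NOT (rule == 'R2' OR rule == 'R3');

-- Union in the replacement R2 rows and the new R4 rows, then sort.
DATA_SET2 = UNION DATA_SET1, set_2, set_3;
DATA_SET3 = ORDER DATA_SET2 BY acctid, rule;
```

On the sample data this produces the same six rows as the original script, but it also generalizes to sets with more rules per account.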
