sparklyr/hive: How to use regex (regexp_replace) correctly?

bkkx9g8r · posted on 2021-06-26 in Hive

Consider the following example:

dataframe_test <- data_frame(mydate = c('2011-03-01T00:00:04.226Z', '2011-03-01T00:00:04.226Z'))

# A tibble: 2 x 1
                    mydate
                     <chr>
1 2011-03-01T00:00:04.226Z
2 2011-03-01T00:00:04.226Z

sdf <- copy_to(sc, dataframe_test, overwrite = TRUE)

> sdf

# Source:   table<dataframe_test> [?? x 1]
# Database: spark_connection
                    mydate
                     <chr>
1 2011-03-01T00:00:04.226Z
2 2011-03-01T00:00:04.226Z

I would like to modify the character timestamp so that it has a more conventional format. I tried to use regexp_replace, but it fails.

> sdf <- sdf %>% mutate(regex = regexp_replace(mydate, '(\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2}).(\\d{3})Z', '$1-$2-$3 $4:$5:$6.$7'))
> sdf

# Source:   lazy query [?? x 2]
# Database: spark_connection
                    mydate                    regex
                     <chr>                    <chr>
1 2011-03-01T00:00:04.226Z 2011-03-01T00:00:04.226Z
2 2011-03-01T00:00:04.226Z 2011-03-01T00:00:04.226Z

Any ideas? What is the correct syntax?

Answer 1 (yebdmbv4)

I had a hard time replacing "." with "", but in the end it worked with:

mutate(myvar2 = regexp_replace(myvar, "[.]", ""))
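The reason a bare "." fails is that in a regular expression "." matches any character, so it has to be put in a character class (or escaped) to match a literal dot. A minimal sketch on the same sdf from the question (column names are just illustrative):

sdf %>% mutate(
  dot_any     = regexp_replace(mydate, ".", ""),   # "." matches every character, so this empties the string
  dot_literal = regexp_replace(mydate, "[.]", "")  # "[.]" matches only the literal dot before the milliseconds
)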
Answer 2 (vcudknz3)

Spark SQL and Hive provide two different functions:

regexp_extract - takes a string, a pattern, and the index of the group to extract.
regexp_replace - takes a string, a pattern, and a replacement string.

The former can be used to extract a single group, with index semantics the same as for java.util.regex.Matcher. For regexp_replace the pattern has to match the whole string; if there is no match, the input string is returned:

sdf %>% mutate(
 regex = regexp_replace(mydate, '^([0-9]{4}).*', "$1"),
 regexp_bad = regexp_replace(mydate, '([0-9]{4})', "$1"))

## Source:   query [2 x 3]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
## # A tibble: 2 x 3
##                     mydate regex               regexp_bad
##                      <chr> <chr>                    <chr>
## 1 2011-03-01T00:00:04.226Z  2011 2011-03-01T00:00:04.226Z
## 2 2011-03-01T00:00:04.226Z  2011 2011-03-01T00:00:04.226Z

With regexp_extract this is not needed:

sdf %>% mutate(regex = regexp_extract(mydate, '([0-9]{4})', 1))

## Source:   query [2 x 2]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
## # A tibble: 2 x 2
##                     mydate regex
##                      <chr> <chr>
## 1 2011-03-01T00:00:04.226Z  2011
## 2 2011-03-01T00:00:04.226Z  2011
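To illustrate the Matcher-style group indexing mentioned above, a minimal sketch on the same sdf: group 0 is the whole match, group 1 the first capture group, and so on (column names are just illustrative):

sdf %>% mutate(
  whole = regexp_extract(mydate, '([0-9]{4})-([0-9]{2})', 0),  # whole match: "2011-03"
  year  = regexp_extract(mydate, '([0-9]{4})-([0-9]{2})', 1),  # first group: "2011"
  month = regexp_extract(mydate, '([0-9]{4})-([0-9]{2})', 2))  # second group: "03"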

Also, due to the indirect execution (R -> Java), you have to escape twice:

sdf %>% mutate(
  regex = regexp_replace(
    mydate, 
    '^(\\\\d{4})-(\\\\d{2})-(\\\\d{2})T(\\\\d{2}):(\\\\d{2}):(\\\\d{2}).(\\\\d{3})Z$',
    '$1-$2-$3 $4:$5:$6.$7'))
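A quick way to see why four backslashes are needed (plain R, no Spark involved; just an illustration of the escape layers): each layer of interpretation strips one level, so the R literal '\\\\d' becomes the two characters \\d in the generated SQL, which the Spark SQL parser finally turns into the regex \d.

pattern <- '^(\\\\d{4})-(\\\\d{2})-(\\\\d{2})T(\\\\d{2}):(\\\\d{2}):(\\\\d{2}).(\\\\d{3})Z$'
cat(pattern, "\n")   # prints ^(\\d{4})-(\\d{2})-... , i.e. exactly what is embedded in the SQL sent to Spark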

Normally you would use Spark datetime functions:

spark_session(sc) %>%  
  invoke("sql",
    "SELECT *, DATE_FORMAT(CAST(mydate AS timestamp), 'yyyy-MM-dd HH:mm:ss.SSS') parsed from dataframe_test") %>% 
  sdf_register

## Source:   query [2 x 2]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
## # A tibble: 2 x 2
##                     mydate                  parsed
##                      <chr>                   <chr>
## 1 2011-03-01T00:00:04.226Z 2011-03-01 01:00:04.226
## 2 2011-03-01T00:00:04.226Z 2011-03-01 01:00:04.226

But unfortunately sparklyr seems to be quite limited here and treats the timestamp as a string.
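For completeness, a sketch of the same conversion through the dplyr interface; this assumes a Spark version where to_timestamp and date_format are available (sparklyr passes functions it does not recognize through to Spark SQL unchanged), and the result is still collected as a character column:

sdf %>% mutate(
  parsed = date_format(to_timestamp(mydate), "yyyy-MM-dd HH:mm:ss.SSS"))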
See also: Change string in DF using hive command and mutate with sparklyr.
