How to divide or multiply every non-string column of a PySpark dataframe by a float constant?
Problem Description
My input dataframe looks like this:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Basics").getOrCreate()
df = spark.createDataFrame(
    data=[('Alice', 4.300, None), ('Bob', float('nan'), 897)],
    schema=['name', 'High', 'Low'],
)
+-----+----+----+
| name|High| Low|
+-----+----+----+
|Alice| 4.3|null|
|  Bob| NaN| 897|
+-----+----+----+
Expected output if divided by 10.0:
+-----+----+----+
| name|High| Low|
+-----+----+----+
|Alice|0.43|null|
|  Bob| NaN|89.7|
+-----+----+----+
Recommended Answer
I don't know of any library function that could do this, but this snippet seems to do the job just fine:
from pyspark.sql.functions import col

CONSTANT = 10.0

# Divide every numeric column by the constant; string columns are left untouched.
for field in df.schema.fields:
    if str(field.dataType) in ['DoubleType', 'FloatType', 'LongType', 'IntegerType', 'DecimalType']:
        name = str(field.name)
        df = df.withColumn(name, col(name) / CONSTANT)

df.show()
Output:
+-----+----+----+
| name|High| Low|
+-----+----+----+
|Alice|0.43|null|
|  Bob| NaN|89.7|
+-----+----+----+
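Note that comparing type names as strings can be brittle, since the string form of a data type may differ across PySpark versions. A minimal alternative sketch (not part of the original answer) that matches numeric columns with isinstance against pyspark.sql.types.NumericType and applies every division in a single select:

from pyspark.sql.functions import col
from pyspark.sql.types import NumericType

CONSTANT = 10.0

# Build one projection: divide numeric columns, pass other columns through as-is.
scaled = df.select(
    *[
        (col(f.name) / CONSTANT).alias(f.name)
        if isinstance(f.dataType, NumericType)
        else col(f.name)
        for f in df.schema.fields
    ]
)
scaled.show()

Multiplying by a constant works the same way; just replace / with * in either snippet.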