>>> d = [{'name': 'Alice', 'age': 1}]
>>> f = spark.createDataFrame(d)
>>> f.collect()
[Row(age=1, name=u'Alice')]
>>> from pyspark.sql import functions as F

Now we want to add a new column, newName.

Method 1

>>> ff = f.withColumn('newName', '===')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-current/python/pyspark/sql/dataframe.py", line 1619, in withColumn
    assert isinstance(col, Column), "col should be Column"
AssertionError: col should be Column

This raises an error: withColumn requires its second argument to be a Column, not a plain Python string.
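
To see exactly what the failed assertion checks, note that a plain string is not a Column, while F.lit() wraps one into a Column (a quick check in the same shell):

>>> from pyspark.sql import Column
>>> isinstance('===', Column)
False
>>> isinstance(F.lit('==='), Column)
True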

Method 2

>>> ff = f.withColumn('newName',F.col('name') + '===')
>>> ff.collect()
[Row(age=1, name=u'Alice', newName=None)]

No error is raised, but the new column is None: in Spark SQL, + means numeric addition rather than string concatenation, so adding a string literal to a string column yields null. For numeric columns, however, + works as expected (a concat-based fix for strings follows the example below):

>>> ff = ff.withColumn('newAge',F.col('age') + 1)
>>> ff.collect()
[Row(age=1, name=u'Alice', newName=None, newAge=2)]
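
If concatenation was the goal, pyspark.sql.functions.concat together with lit builds the intended string; a minimal sketch against the original f, with the expected result:

>>> ff2 = f.withColumn('newName', F.concat(F.col('name'), F.lit('===')))
>>> ff2.collect()
[Row(age=1, name=u'Alice', newName=u'Alice===')]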

Method 3

>>> ff = ff.withColumn('newNameV2',F.lit('==='))
>>> ff.collect()
[Row(age=1, name=u'Alice', newName=None, newAge=2, newNameV2=u'===')]

The pyspark.sql.functions.lit() function wraps a literal value in a Column, so the new column is filled with that constant.
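
lit() also accepts other Python literals (ints, floats, booleans); for a null column of a specific type, a common pattern is lit(None) plus a cast, sketched here:

>>> ff = ff.withColumn('flag', F.lit(True))
>>> ff = ff.withColumn('empty', F.lit(None).cast('string'))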

Method 4

Convert the DataFrame to an RDD and add the field with a map function, as sketched below.
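
A minimal sketch of this approach, continuing from the f defined above: each Row is unpacked into a dict, extended, and rebuilt (note that the kwargs form of Row sorts field names alphabetically in older PySpark versions):

>>> from pyspark.sql import Row
>>> rdd2 = f.rdd.map(lambda row: Row(newName='===', **row.asDict()))  # add the field per row
>>> ff4 = spark.createDataFrame(rdd2)
>>> ff4.collect()
[Row(age=1, name=u'Alice', newName=u'===')]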
