I created this example notebook to demonstrate a potential danger when dealing with mutable objects in PySpark UDFs.

In the notebook, we can see that Python lists returned from UDFs behave like pointers: within each group of the aggregation, rows that share the same value in column b end up referencing the same list object rather than independent copies.
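The pointer-like behavior is ordinary Python object semantics, which can be illustrated without Spark. This is a minimal sketch (the names `shared` and `fake_udf` are illustrative, not from the notebook):

```python
# A function that mutates and returns the same list object each call
# behaves like this: every "row" sees the same underlying object.
shared = []

def fake_udf(value):
    shared.append(value)  # mutate the one shared list
    return shared         # every caller gets the same reference

row_a = fake_udf(1)
row_b = fake_udf(2)

# Both rows point at the same list, so mutating one "changes" both.
print(row_a is row_b)  # True
print(row_a)           # [1, 2]
```

The same aliasing shows up in a UDF whenever it returns a list it did not freshly construct.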

To solve this problem, we can do a few things.

1. Cache the dataframe after aggregation.

```python
sdf_2 = sdf.groupby("language", "b").agg(F.max("b").alias("combined")).cache()
```

2. Make a copy of the mutable object.

The original snippet stopped at the decorator; a completed version might look like this (the function name is illustrative):

```python
sch = T.ArrayType(T.IntegerType())

@F.udf(returnType=sch)
def copy_array(arr):
    # Return a fresh list so rows don't share the same object.
    return list(arr)
```

No backlinks identified. Reference this note using the Note ID til/data/pyspark.beware-of-mutable-objects.md in other notes to connect them.