Question

I want to pivot a Spark DataFrame. I referred to the PySpark documentation, and based on the pivot function, the clue is .groupBy('name').pivot('name', values=None). Here's my dataset:

 In[75]:  spDF.show()
 Out[75]:

+-----------+-----------+
|customer_id|       name|
+-----------+-----------+
|      25620| MCDonnalds|
|      25620|  STARBUCKS|
|      25620|        nan|
|      25620|        nan|
|      25620| MCDonnalds|
|      25620|        nan|
|      25620| MCDonnalds|
|      25620|DUNKINDONUT|
|      25620|   LOTTERIA|
|      25620|        nan|
|      25620| MCDonnalds|
|      25620|DUNKINDONUT|
|      25620|DUNKINDONUT|
|      25620|        nan|
|      25620|        nan|
|      25620|        nan|
|      25620|        nan|
|      25620|   LOTTERIA|
|      25620|   LOTTERIA|
|      25620|  STARBUCKS|
+-----------+-----------+
only showing top 20 rows

Then I try to pivot the table on name:

In [96]:
spDF.groupBy('name').pivot('name', values=None)
Out[96]:
<pyspark.sql.group.GroupedData at 0x7f0ad03750f0>

And when I try to show it:

In [98]:
spDF.groupBy('name').pivot('name', values=None).show()
Out [98]:

---------------------------------------------------------------------------
AttributeError                       Traceback (most recent call last)
<ipython-input-98-94354082e956> in <module>()
----> 1 spDF.groupBy('name').pivot('name', values=None).show()
AttributeError: 'GroupedData' object has no attribute 'show'

I don't know why a 'GroupedData' object can't be shown. What should I do to solve this issue?

Answers

The pivot() method returns a GroupedData object, just like groupBy(). You cannot call show() on a GroupedData object until you apply an aggregate function (such as sum() or even count()) to it first.

See this article for more information
