
Improve DataFrame performance for large datasets


I have a large dataset and need to extract the root packages, as below:

  1. Sort the data (the package column) by string length.
  2. Starting from the beginning, scan the rows that follow; if a row's package starts with the current row's package, mark it as False.
  3. Repeat step 2 until the end (a minimal sketch of these steps follows this list).
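
To make these steps concrete, here is a minimal sketch of the same pass on a plain Python list (the sample packages are hypothetical, and the name column is ignored):

packages = ['com.example.a', 'com.example', 'com.fun', 'com.fun.e']

# Step 1: sort by length so every root precedes its sub-packages.
packages.sort(key=len)

keep = [True] * len(packages)
for i, root in enumerate(packages):
    if not keep[i]:
        continue  # already marked as a sub-package of an earlier root
    # Step 2: mark every later entry that starts with the current one.
    for j in range(i + 1, len(packages)):
        if keep[j] and packages[j].startswith(root):
            keep[j] = False

print([p for p, k in zip(packages, keep) if k])  # ['com.fun', 'com.example']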

To improve performance, I added a flag column to keep track of whether each row has already been processed.
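
In isolation, that bookkeeping looks like this (a stripped-down sketch on a hypothetical three-row frame; it reads the flag back from the frame itself, since iterrows yields copies of the rows):

import pandas as pd

df = pd.DataFrame({'package': ['com.a', 'com.a.x', 'com.b'],
                   'flag': [None, None, None]})

for idx in df.index:
    if df.loc[idx, 'flag'] is not None:
        continue  # an earlier root already classified this row
    df.loc[idx, 'flag'] = True  # current row becomes a root
    prefix = df.loc[idx, 'package']
    for jdx in df.index[idx + 1:]:
        if df.loc[jdx, 'package'].startswith(prefix):
            df.loc[jdx, 'flag'] = False

print(df)  # com.a and com.b stay True, com.a.x is flagged False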

It sounds like the big-O is now O(n) instead of O(n²). Is there anything we can improve?

The obvious candidate is the second for loop: right now it uses continue to skip the earlier items. Is there a better way to do that? (See the sketch after the code below.)

import pandas as pd
import tabulate

def dumpdf(df):
    # Pretty-print a DataFrame as a psql-style table.
    if len(df) == 0:
        return
    df = df.reset_index(drop=True)
    tab = tabulate.tabulate(df, headers='keys', tablefmt='psql', showindex=True)
    print(tab)

def main():
    data = [
        ['A', 'com.example'],
        ['A', 'com.example.a'],
        ['A', 'com.example.b.c'],
        ['A', 'com.fun'],
        ['B', 'com.demo'],
        ['B', 'com.demo.b.c'],
        ['B', 'com.fun'],
        ['B', 'com.fun.e'],
        ['B', 'com.fun.f.g'],
    ]
    df = pd.DataFrame(data, columns=['name', 'package'])
    df['flag'] = None
    # Step 1: sort by package length so roots come before their sub-packages.
    df = df.sort_values(by='package', key=lambda x: x.str.len()).reset_index(drop=True)
    for idx, row in df.iterrows():
        if row['flag'] is None:
            df.loc[idx, 'flag'] = True
            # Step 2: mark every later row of the same name that extends this package.
            for jdx, jrow in df.iterrows():
                if jdx <= idx:
                    continue
                if row['name'] == jrow['name']:
                    if jrow['package'].startswith(row['package']):
                        df.loc[jdx, 'flag'] = False
    # Keep only the root packages, then collapse them per name.
    df = df[df['flag']]
    df = df.groupby('name', as_index=False).agg({'package': '\n'.join})
    dumpdf(df)

main()
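
On the second-loop question above: one alternative to the continue guard is to scan only the remaining rows with an iloc slice. A hedged sketch, not benchmarked, meant as a drop-in for the nested loop in the script (it assumes the same df, with the index already reset so labels match positions):

for idx, row in df.iterrows():
    if row['flag'] is None:
        df.loc[idx, 'flag'] = True
        # The slice starts at idx + 1, so the "jdx <= idx: continue" test disappears.
        for jdx, jrow in df.iloc[idx + 1:].iterrows():
            if jrow['name'] == row['name'] and jrow['package'].startswith(row['package']):
                df.loc[jdx, 'flag'] = False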
