Dataset columns:
question_id: int64 (59.6M to 70.5M)
question_title: string (length 15 to 150)
question_body: string (length 134 to 33.4k)
accepted_answer_id: int64 (59.6M to 73.3M)
question_creation_date: timestamp[us]
question_answer_count: int64 (1 to 9)
question_favorite_count: float64 (0 to 8, some null)
question_score: int64 (-6 to 52)
question_view_count: int64 (10 to 79k)
tags: string (2 distinct values)
answer_body: string (length 48 to 16.3k)
answer_creation_date: timestamp[us]
answer_score: int64 (-2 to 59)
link: string (length 31 to 107)
context: string (length 134 to 251k)
answer_start: int64 (0 to 1.28k)
answer_end: int64 (49 to 10.2k)
question: string (length 158 to 33.1k)
66,827,853
|
Using a loop to get a sum of computed values (using column math) for each dataframe entry in pandas
|
<p>I have the following data frame:</p>
<p><a href="https://i.stack.imgur.com/seaxm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/seaxm.png" alt="enter image description here" /></a></p>
<p>For each entry of this data frame, I am looking to get the sum of log(df['s_i']/df['s_18'])*log(i/18) for i in h = [18, 56, 98, 123, 148].</p>
<p>I tried the following:</p>
<pre><code>a = []
h = [18, 56, 98, 123, 148]
for i in df.index:
for zi in h:
a.append(log(df.loc[i,'s_'+str(zi)]/df.loc[i,'s_18'])*log(zi/18))
b = sum(a)
</code></pre>
<p>However, it did not work. Any idea how to solve this problem? Any help would be greatly appreciated!</p>
| 66,827,939
| 2021-03-27T03:53:02.977000
| 1
| null | -1
| 71
|
python|pandas
|
<p>One thing to note: use <code>np.log</code> or <code>math.log</code> (with whatever base you want), unless you have a custom <code>log</code> implementation of your own.<br />
Also, <code>zr</code> should be changed to <code>18</code> to match the formula in your question:</p>
<pre><code>log(df['s_i']/df['s_18'])*log(i/18)
</code></pre>
<hr />
<p>Here I am using a <code>DataFrame</code> with a similar layout; <strong>please check out this code:</strong></p>
<pre><code>import numpy as np
import pandas as pd
h = [18, 56, 98, 123, 148]
df = pd.DataFrame(
data= np.arange(1, 51).reshape(10, 5),
columns= ['s_'+str(i) for i in h]
)
# Note: `b += sum(a)` must sit in the outer for-loop, with `a` reset on each
# iteration; otherwise `a` keeps growing and earlier items get summed again.
b = 0
for i in df.index:
a = []
for zi in h:
a.append(np.log(df.loc[i,'s_'+str(zi)]/df.loc[i,'s_18'])*np.log(zi/18))
b += sum(a)
print(b)
</code></pre>
<hr />
<p><strong>Method-2</strong></p>
<pre><code>equation = "np.log(df.loc[i,'s_'+str(j)]/df.loc[i,'s_18'])*np.log(j/18)"
b = 0
for i in df.index:
b += sum(list(map(lambda j: eval(equation), h)))
print(b)
</code></pre>
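<p>For completeness, a fully vectorized sketch of the same sum (no Python loops), reusing <code>df</code>, <code>h</code> and <code>np</code> from above and assuming the columns really are named <code>s_18 ... s_148</code>:</p>
<pre><code>cols = ['s_' + str(zi) for zi in h]
ratios = df[cols].div(df['s_18'], axis=0)         # s_zi / s_18, row by row
weights = np.log(np.array(h) / 18)                # log(zi / 18), one value per column
b = (np.log(ratios) * weights).to_numpy().sum()   # grand total over all rows and columns
print(b)
</code></pre>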
| 2021-03-27T04:11:22.310000
| 1
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/05_add_columns.html
|
How to create new columns derived from existing columns?#
In [1]: import pandas as pd
Data used for this tutorial:
One thing to note: use np.log or math.log (with whatever base you want), unless you have a custom log implementation of your own.
Also, zr should be changed to 18 to match the formula in your question:
log(df['s_i']/df['s_18'])*log(i/18)
Here I am using a DataFrame with a similar layout; please check out this code:
import numpy as np
import pandas as pd
h = [18, 56, 98, 123, 148]
df = pd.DataFrame(
data= np.arange(1, 51).reshape(10, 5),
columns= ['s_'+str(i) for i in h]
)
# Note: `b += sum(a)` must sit in the outer for-loop, with `a` reset on each
# iteration; otherwise `a` keeps growing and earlier items get summed again.
b = 0
for i in df.index:
a = []
for zi in h:
a.append(np.log(df.loc[i,'s_'+str(zi)]/df.loc[i,'s_18'])*np.log(zi/18))
b += sum(a)
print(b)
Method-2
equation = "np.log(df.loc[i,'s_'+str(j)]/df.loc[i,'s_18'])*np.log(j/18)"
b = 0
for i in df.index:
b += sum(list(map(lambda j: eval(equation), h)))
print(b)
Air quality data
For this tutorial, air quality data about NO₂ is used, made
available by OpenAQ and using the
py-openaq package.
The air_quality_no2.csv data set provides NO₂ values for
the measurement stations FR04014, BETR801 and London Westminster
in respectively Paris, Antwerp and London.
In [2]: air_quality = pd.read_csv("data/air_quality_no2.csv", index_col=0, parse_dates=True)
In [3]: air_quality.head()
Out[3]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN
How to create new columns derived from existing columns?#
I want to express the NO₂ concentration of the station in London in mg/m³.
(If we assume temperature of 25 degrees Celsius and pressure of 1013
hPa, the conversion factor is 1.882)
In [4]: air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882
In [5]: air_quality.head()
Out[5]:
station_antwerp ... london_mg_per_cubic
datetime ...
2019-05-07 02:00:00 NaN ... 43.286
2019-05-07 03:00:00 50.5 ... 35.758
2019-05-07 04:00:00 45.0 ... 35.758
2019-05-07 05:00:00 NaN ... 30.112
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 4 columns]
To create a new column, use the [] brackets with the new column name
at the left side of the assignment.
Note
The calculation of the values is done element-wise. This
means all values in the given column are multiplied by the value 1.882
at once. You do not need to use a loop to iterate each of the rows!
I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column.
In [6]: air_quality["ratio_paris_antwerp"] = (
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...: )
...:
In [7]: air_quality.head()
Out[7]:
station_antwerp ... ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN ... NaN
2019-05-07 03:00:00 50.5 ... 0.495050
2019-05-07 04:00:00 45.0 ... 0.615556
2019-05-07 05:00:00 NaN ... NaN
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 5 columns]
The calculation is again element-wise, so the / is applied for the
values in each row.
Also other mathematical operators (+, -, *, /,…) or
logical operators (<, >, ==,…) work element-wise. The latter was already
used in the subset data tutorial to filter
rows of a table using a conditional expression.
If you need more advanced logic, you can use arbitrary Python code via apply().
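For instance, a minimal sketch (not part of the tutorial itself) of both ideas, reusing the air_quality frame defined above:
# element-wise logical operator: boolean mask of the hours where Paris reads higher than Antwerp
paris_higher = air_quality["station_paris"] > air_quality["station_antwerp"]
# arbitrary row-wise logic via apply(): the highest reading across the three stations per hour
station_cols = ["station_antwerp", "station_paris", "station_london"]
row_max = air_quality[station_cols].apply(lambda row: row.max(), axis=1)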
I want to rename the data columns to the corresponding station identifiers used by OpenAQ.
In [8]: air_quality_renamed = air_quality.rename(
...: columns={
...: "station_antwerp": "BETR801",
...: "station_paris": "FR04014",
...: "station_london": "London Westminster",
...: }
...: )
...:
In [9]: air_quality_renamed.head()
Out[9]:
BETR801 FR04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
The rename() function can be used for both row labels and column
labels. Provide a dictionary with the keys the current names and the
values the new names to update the corresponding names.
The mapping should not be restricted to fixed names only, but can be a
mapping function as well. For example, converting the column names to
lowercase letters can be done using a function as well:
In [10]: air_quality_renamed = air_quality_renamed.rename(columns=str.lower)
In [11]: air_quality_renamed.head()
Out[11]:
betr801 fr04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
Details about column or row label renaming are provided in the user guide section on renaming labels.
REMEMBER
Create a new column by assigning the output to the DataFrame with a
new column name in between the [].
Operations are element-wise, no need to loop over rows.
Use rename with a dictionary or function to rename row labels or
column names.
The user guide contains a separate section on column addition and deletion.
| 124
| 1,131
|
Using a loop to get a sum of computed values (using column math) for each dataframe entry in pandas
I have the following data frame:
For each entry of this data frame, I am looking to get the sum of log(df['s_i']/df['s_18'])*log(i/18) for i in h = [18, 56, 98, 123, 148].
I tried the following:
a = []
h = [18, 56, 98, 123, 148]
for i in df.index:
for zi in h:
a.append(log(df.loc[i,'s_'+str(zi)]/df.loc[i,'s_18'])*log(zi/18))
b = sum(a)
However, it did not work. Any idea how to solve this problem? Any help would be greatly appreciated!
|
68,056,665
|
pandas delete row unless strings from two columns in another dataframe
|
<p>I have a big dataframe like this:</p>
<pre><code>ref leftstr rightstr
12 fish 10 47 red A45
49 abc bread x10 green 12
116 19 cheese 19A blue blue 4040
118 8 fish 9fish A10 red B11
200 cheese 000 99 green 98
240 142Z cheese B blue 42 12
450 bread 94.16 0.6 red blue
...
</code></pre>
<p>And a large list like this:</p>
<pre><code>li = [
'47 red A45 bread fish 10',
'cheese 000 [purple] orangeA 99 green 98',
'bread 94.16 green 12',
'0.6 red blue abc bread x10',
'bread 19 cheese 19A 100 blue blue 4040',
'8 fish 9fish 0.6 red blue',
'bread fish 10 red A45'
...
]
</code></pre>
<p>I want to delete rows from the df if the exact strings in leftstr and rightstr are not both present in any item (but it has to be the same item) of the list. It doesn't matter whether leftstr or rightstr appear first in the list item, or if there is text in the list item as well as leftstr and rightstr.</p>
<pre><code>Desired output:
ref leftstr rightstr
12 fish 10 47 red A45
116 19 cheese 19A blue blue 4040
200 cheese 000 99 green 98
</code></pre>
<p>So, for example, ref 49 is deleted because, although leftstr and rightstr are both in a list item, they are not in the same list item.</p>
| 68,057,095
| 2021-06-20T14:10:04.567000
| 4
| null | 3
| 74
|
python|pandas
|
<p>You could try this:</p>
<pre class="lang-py prettyprint-override"><code># Identify rows to keep
rows_to_keep = []
for item in li:
for i, row, in df.iterrows():
if row["leftstr"] in item and row["rightstr"] in item:
rows_to_keep.append(row["ref"])
# Select rows
filtered_df = df[df["ref"].isin(rows_to_keep)]
print(filtered_df)
# Outputs
ref leftstr rightstr
0 12 fish 10 47 red A45
2 116 19 cheese 19A blue blue 4040
4 200 cheese 000 99 green 98
</code></pre>
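<p>A shorter variant of the same idea (a sketch, assuming the same <code>df</code> and <code>li</code>) builds a boolean mask with <code>apply</code> instead of collecting <code>ref</code> values:</p>
<pre class="lang-py prettyprint-override"><code># keep a row only if some single list item contains both of its strings
mask = df.apply(
    lambda row: any(row["leftstr"] in item and row["rightstr"] in item for item in li),
    axis=1,
)
filtered_df = df[mask]
</code></pre>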
| 2021-06-20T14:58:42.557000
| 1
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
You could try this:
# Identify rows to keep
rows_to_keep = []
for item in li:
for i, row, in df.iterrows():
if row["leftstr"] in item and row["rightstr"] in item:
rows_to_keep.append(row["ref"])
# Select rows
filtered_df = df[df["ref"].isin(rows_to_keep)]
print(filtered_df)
# Outputs
ref leftstr rightstr
0 12 fish 10 47 red A45
2 116 19 cheese 19A blue blue 4040
4 200 cheese 000 99 green 98
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
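Since the rendered result frames are not reproduced in this text, a quick sketch (not from the guide) of the outer-versus-inner difference, reusing df1 and df4 from above:
outer = pd.concat([df1, df4], axis=1, join="outer")
inner = pd.concat([df1, df4], axis=1, join="inner")
print(outer.index.tolist())  # union of the two indexes: [0, 1, 2, 3, 6, 7]
print(inner.index.tolist())  # intersection only: [2, 3]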
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour is to let the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
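A small self-contained sketch (not from the guide) of that equivalence for a pure index-on-index join:
import pandas as pd
a = pd.DataFrame({"v1": [1, 2]}, index=["K0", "K1"])
b = pd.DataFrame({"v2": [3, 4]}, index=["K0", "K2"])
# joining purely on the index: these two calls give the same result
via_join = a.join(b, how="inner")
via_merge = pd.merge(a, b, left_index=True, right_index=True, how="inner")
assert via_join.equals(via_merge)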
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method   SQL Join Name      Description
left           LEFT OUTER JOIN    Use keys from left frame only
right          RIGHT OUTER JOIN   Use keys from right frame only
outer          FULL OUTER JOIN    Use union of keys from both frames
inner          INNER JOIN         Use intersection of keys from both frames
cross          CROSS JOIN         Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
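A tiny sketch (not from the guide) of how quickly this blows up: each key value contributes the product of its occurrence counts in the two frames.
import pandas as pd
left = pd.DataFrame({"B": [2] * 3, "v_left": range(3)})    # the key 2 appears 3 times
right = pd.DataFrame({"B": [2] * 4, "v_right": range(4)})  # the key 2 appears 4 times
result = pd.merge(left, right, on="B")
print(len(result))  # 12 rows: 3 * 4 matches for the single key value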
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin                 _merge value
Merge key only in 'left' frame     left_only
Merge key only in 'right' frame    right_only
Merge key in both frames           both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
The join above is equivalent to, but less verbose and more memory efficient / faster than, the merge below.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
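Since the rendered outputs are not shown here, a small standalone sketch (not from the guide) contrasting the two methods:
import numpy as np
import pandas as pd
a = pd.DataFrame({"x": [1.0, np.nan], "y": [np.nan, 4.0]})
b = pd.DataFrame({"x": [10.0, 20.0], "y": [30.0, 40.0]})
print(a.combine_first(b))  # returns a new frame: only a's NaN cells are filled from b
a.update(b)                # modifies a in place: every non-NA cell of b overwrites a
print(a)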
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 843
| 1,306
|
pandas delete row unless strings from two columns in another dataframe
I have a big dataframe like this:
ref leftstr rightstr
12 fish 10 47 red A45
49 abc bread x10 green 12
116 19 cheese 19A blue blue 4040
118 8 fish 9fish A10 red B11
200 cheese 000 99 green 98
240 142Z cheese B blue 42 12
450 bread 94.16 0.6 red blue
...
And a large list like this:
li = [
'47 red A45 bread fish 10',
'cheese 000 [purple] orangeA 99 green 98',
'bread 94.16 green 12',
'0.6 red blue abc bread x10',
'bread 19 cheese 19A 100 blue blue 4040',
'8 fish 9fish 0.6 red blue',
'bread fish 10 red A45'
...
]
I want to delete rows from the df if the exact strings in leftstr and rightstr are not both present in any item (but it has to be the same item) of the list. It doesn't matter whether leftstr or rightstr appear first in the list item, or if there is text in the list item as well as leftstr and rightstr.
Desired output:
ref leftstr rightstr
12 fish 10 47 red A45
116 19 cheese 19A blue blue 4040
200 cheese 000 99 green 98
So, for example, ref 49 is deleted because, although leftstr and rightstr are both in a list item, they are not in the same list item.
|
70,037,715
|
How do I use pandas to open file and process each row for that file?
|
<p>So I have a dataframe containing image filenames and multiple (X, Y) coordinates for each image, like this::</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">file</th>
<th style="text-align: right;">x</th>
<th style="text-align: right;">y</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">File1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">File1</td>
<td style="text-align: right;">103</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">File2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">File2</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">9</td>
</tr>
</tbody>
</table>
</div>
<p>I want to open each file and draw the points points on it.</p>
<p>I want to open each file in the file column once only. (I know how to open and draw on the file, I am asking how to loop through the dataframe correctly). Then process each line for that file.</p>
<p>I know I can use perhaps <code>df['file'].unique()</code> but don't know where to go from that.</p>
<p>I can probably work this out naively but is there a clever pandas way of doing it? I imagine it may have been asked but I can't figure out how to search for it.</p>
| 70,038,479
| 2021-11-19T16:06:30.137000
| 1
| null | 0
| 75
|
python|pandas
|
<p>You can use <code>groupby</code> to group all the rows into separate dataframes for each unique value of the <code>file</code> columns, loop over those dataframes, and use <code>iterrows</code> on each dataframe to loop over each row.</p>
<pre class="lang-py prettyprint-override"><code>for file, group_df in df.groupby('file'):
print(f'{file}:')
for idx, row in group_df.iterrows():
x, y = row[['x', 'y']]
print(f' x={x} y={y}')
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>File1:
x=1 y=2
x=103 y=6
File2:
x=6 y=3
x=4 y=9
</code></pre>
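<p>Applied to your drawing task, a sketch of the same loop (with <code>draw_points</code> standing in for whatever open-and-draw code you already have, since that part isn't shown):</p>
<pre class="lang-py prettyprint-override"><code>for file, group_df in df.groupby('file'):
    # `file` is seen once per unique value, with all of its (x, y) rows together
    draw_points(file, group_df[['x', 'y']].to_numpy())  # draw_points is your own function
</code></pre>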
| 2021-11-19T17:10:19.403000
| 1
|
https://pandas.pydata.org/docs/user_guide/io.html
|
IO tools (text, CSV, HDF5, …)#
You can use groupby to group all the rows into separate dataframes for each unique value of the file columns, loop over those dataframes, and use iterrows on each dataframe to loop over each row.
for file, group_df in df.groupby('file'):
print(f'{file}:')
for idx, row in group_df.iterrows():
x, y = row[['x', 'y']]
print(f' x={x} y={y}')
Output:
File1:
x=1 y=2
x=103 y=6
File2:
x=6 y=3
x=4 y=9
The pandas I/O API is a set of top level reader functions accessed like
pandas.read_csv() that generally return a pandas object. The corresponding
writer functions are object methods that are accessed like
DataFrame.to_csv(). Below is a table containing available readers and
writers.
Format Type   Data Description        Reader           Writer
text          CSV                     read_csv         to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json        to_json
text          HTML                    read_html        to_html
text          LaTeX                                    Styler.to_latex
text          XML                     read_xml         to_xml
text          Local clipboard         read_clipboard   to_clipboard
binary        MS Excel                read_excel       to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf         to_hdf
binary        Feather Format          read_feather     to_feather
binary        Parquet Format          read_parquet     to_parquet
binary        ORC Format              read_orc         to_orc
binary        Stata                   read_stata       to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle      to_pickle
SQL           SQL                     read_sql         to_sql
SQL           Google BigQuery         read_gbq         to_gbq
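As a quick illustration of the reader/writer pattern described above (the file name is just an example):
import pandas as pd
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
df.to_csv("example.csv", index=False)    # writer functions are DataFrame methods
round_trip = pd.read_csv("example.csv")  # reader functions live in the top-level pandas namespace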
Here is an informal performance comparison for some of these IO methods.
Note
For examples that use the StringIO class, make sure you import it
with from io import StringIO for Python 3.
CSV & text files#
The workhorse function for reading text files (a.k.a. flat files) is
read_csv(). See the cookbook for some advanced strategies.
Parsing options#
read_csv() accepts the following common arguments:
Basic#
filepath_or_buffer : various
Either a path to a file (a str, pathlib.Path,
or py:py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as an open file or
StringIO).
sep : str, defaults to ',' for read_csv(), \t for read_table()
Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used and automatically detect the separator by Python’s builtin sniffer tool,
csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
delimiter : str, default None
Alternative argument name for sep.
delim_whitespace : boolean, default False
Specifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.
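A minimal sketch (not part of the original listing) of the multi-character separator case described above:
from io import StringIO
import pandas as pd
text = "a::1\nb::2"
# a separator longer than one character (and different from '\s+') is treated as a
# regular expression and forces the Python parsing engine
df = pd.read_csv(StringIO(text), sep="::", engine="python", header=None, names=["key", "value"])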
Column and index locations and names#
header : int or list of ints, default 'infer'
Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first line of the file, if column names are
passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to replace
existing names.
The header can be a list of ints that specify row locations
for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows
that are not specified will be skipped (e.g. 2 in this example is
skipped). Note that this parameter ignores commented lines and empty
lines if skip_blank_lines=True, so header=0 denotes the first
line of data rather than the first line of the file.
names : array-like, default None
List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_col : int, str, sequence of int / str, or False, optional, default None
Column(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note
index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in the body
of the data file, then a default index is used. If it is larger, then
the first columns are used as index so that the remaining number of fields in
the body are equal to the number of fields in the header.
The first row after the header is used to determine the number of columns,
which will go into the index. If the subsequent rows contain less columns
than the first row, they are filled with NaN.
This can be avoided through usecols. This ensures that the columns are
taken as is and the trailing data are ignored.
usecols : list-like or callable, default None
Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To
instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for
['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names,
returning names where the callable function evaluates to True:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to the reader function to squeeze
the data.
prefix : str, default None
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
In [6]: data = "col1,col2,col3\na,b,1"
In [7]: df = pd.read_csv(StringIO(data))
In [8]: df.columns = [f"pre_{col}" for col in df.columns]
In [9]: df
Out[9]:
pre_col1 pre_col2 pre_col3
0 a b 1
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’.
Passing in False will cause data to be overwritten if there are duplicate
names in the columns.
Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
General parsing configuration#
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}
Use str or object together with suitable na_values settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed (see the sketch after this parameter list).
engine : {'c', 'python', 'pyarrow'}
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be
integers or column labels.
true_values : list, default None
Values to consider as True.
false_values : list, default None
Values to consider as False.
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise:
In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [11]: pd.read_csv(StringIO(data))
Out[11]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]:
col1 col2 col3
0 a b 2
skipfooter : int, default 0
Number of lines at bottom of file to skip (unsupported with engine='c').
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser)
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
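As mentioned in the dtype entry above, a defaultdict can supply a fallback dtype for columns that are not listed explicitly. A minimal sketch with hypothetical data, assuming pandas >= 1.5:

from collections import defaultdict
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6"

# Columns not listed explicitly fall back to the default factory, so "a"
# is parsed as int64 while "b" and "c" are kept as object (strings).
dtypes = defaultdict(lambda: "object", a="int64")
df = pd.read_csv(StringIO(data), dtype=dtypes)
print(df.dtypes)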
NA and missing data handling#
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. See na values const below
for a list of the values interpreted as NaN by default.
keep_default_na : boolean, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values.
Datetime handling#
parse_dates : boolean or list of ints or names or list of lists or dict, default False.
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date
column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.
Note
A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the
original columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format.
cache_dates : boolean, default True
If True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration#
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with
get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See iterating and chunking below.
Quoting, compression, and file format#
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'
For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip,
bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’,
‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression. Can also be a dict with key 'method'
set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are
forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor.
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1} (a round-trip sketch follows this parameter list).
Changed in version 1.1.0: dict option extended to support gzip and bz2.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
thousands : str, default None
Thousands separator.
decimal : str, default '.'
Character to recognize as decimal point. E.g. use ',' for European data.
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the
high-precision converter, and round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1)
The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE,
indicate whether or not to interpret two consecutive quotechar elements
inside a field as a single quotechar element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.
encoding : str, default None
Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of
Python standard encodings.
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
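As referenced in the compression entry above, a minimal round-trip sketch using the dict form (the file name tmp.csv.gz is hypothetical):

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Write a reproducible gzip archive via the dict form of ``compression``,
# then read it back; the default 'infer' also recognizes the .gz suffix.
df.to_csv("tmp.csv.gz", compression={"method": "gzip", "compresslevel": 1, "mtime": 1})
roundtrip = pd.read_csv("tmp.csv.gz", index_col=0)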
Error handling#
error_bad_lines : boolean, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be
returned. If False, then these “bad lines” will be dropped from the
DataFrame that is returned. See bad lines
below.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
warn_bad_lines : boolean, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
on_bad_lines : (‘error’, ‘warn’, ‘skip’), default ‘error’
Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are:
‘error’, raise a ParserError when a bad line is encountered.
‘warn’, print a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
Specifying column data types#
You can indicate the data type for the whole DataFrame or individual
columns:
In [13]: import numpy as np
In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [16]: df = pd.read_csv(StringIO(data), dtype=object)
In [17]: df
Out[17]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [18]: df["a"][0]
Out[18]: '1'
In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
In [20]: df.dtypes
Out[20]:
a int64
b object
c float64
d Int64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s)
contain only one dtype. If you’re unfamiliar with these concepts, you can
see here to learn more about dtypes, and
here to learn more about object conversion in
pandas.
For instance, you can use the converters argument
of read_csv():
In [21]: data = "col_1\n1\n2\n'A'\n4.22"
In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
In [23]: df
Out[23]:
col_1
0 1
1 2
2 'A'
3 4.22
In [24]: df["col_1"].apply(type).value_counts()
Out[24]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the
dtypes after reading in the data,
In [25]: df2 = pd.read_csv(StringIO(data))
In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
In [27]: df2
Out[27]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [28]: df2["col_1"].apply(type).value_counts()
Out[28]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing
as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes
depends on your specific needs. In the case above, if you wanted to NaN out
the data anomalies, then to_numeric() is probably your best option.
However, if you wanted for all the data to be coerced, no matter the type, then
using the converters argument of read_csv() would certainly be
worth trying.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently,
you can end up with column(s) with mixed dtypes. For example,
In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
In [30]: df = pd.DataFrame({"col_1": col_1})
In [31]: df.to_csv("foo.csv")
In [32]: mixed_df = pd.read_csv("foo.csv")
In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
will result with mixed_df containing an int dtype for certain chunks
of the column, and str for others due to the mixed dtypes from the
data that was read in. It is important to note that the overall column will be
marked with a dtype of object, which is used for columns with mixed dtypes.
Specifying categorical dtype#
Categorical columns can be parsed directly by specifying dtype='category' or
dtype=CategoricalDtype(categories, ordered).
In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [36]: pd.read_csv(StringIO(data))
Out[36]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]:
col1 object
col2 object
col3 int64
dtype: object
In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
Individual columns can be parsed as a Categorical using a dict
specification:
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]:
col1 category
col2 object
col3 int64
dtype: object
Specifying dtype='category' will result in an unordered Categorical
whose categories are the unique values observed in the data. For more
control on the categories and order, create a
CategoricalDtype ahead of time, and pass that for
that column’s dtype.
In [40]: from pandas.api.types import CategoricalDtype
In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)
In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]:
col1 category
col2 object
col3 int64
dtype: object
When using dtype=CategoricalDtype, “unexpected” values outside of
dtype.categories are treated as missing values.
In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c'
In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1
Out[44]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): ['a', 'b', 'd']
This matches the behavior of Categorical.set_categories().
Note
With dtype='category', the resulting categories will always be parsed
as strings (object dtype). If the categories are numeric they can be
converted using the to_numeric() function, or as appropriate, another
converter such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (
all numeric, all datetimes, etc.), the conversion is done automatically.
In [45]: df = pd.read_csv(StringIO(data), dtype="category")
In [46]: df.dtypes
Out[46]:
col1 category
col2 category
col3 category
dtype: object
In [47]: df["col3"]
Out[47]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): ['1', '2', '3']
In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories)
In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories)
In [50]: df["col3"]
Out[50]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
Naming and using columns#
Handling column names#
A file may or may not have a header row. pandas assumes the first row should be
used as the column names:
In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
In [52]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [54]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
Out[55]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None)
Out[56]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header. This will skip the preceding rows:
In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
In [58]: pd.read_csv(StringIO(data), header=1)
Out[58]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
Note
Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first non-blank line of the file, if column
names are passed explicitly then the behavior is identical to
header=None.
Duplicate names parsing#
Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
In [59]: data = "a,b,a\n0,1,2\n3,4,5"
In [60]: pd.read_csv(StringIO(data))
Out[60]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default,
which modifies a series of duplicate columns ‘X’, …, ‘X’ to become
‘X’, ‘X.1’, …, ‘X.N’.
Filtering columns (usecols)#
The usecols argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
In [62]: pd.read_csv(StringIO(data))
Out[62]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"])
Out[63]:
b d
0 2 foo
1 5 bar
2 8 baz
In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[64]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
Out[65]:
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to
use in the final result:
In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
Out[66]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c”
columns from the output.
Comments and empty lines#
Ignoring line comments and empty lines#
If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well.
In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6"
In [68]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [69]: pd.read_csv(StringIO(data), comment="#")
Out[69]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False, then read_csv will not ignore blank lines:
In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[71]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header uses row numbers (ignoring commented/empty
lines), while skiprows uses line numbers (including commented/empty lines):
In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3"
In [73]: pd.read_csv(StringIO(data), comment="#", header=1)
Out[73]:
A B C
0 1 2 3
In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3"
In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2)
Out[75]:
a b c
0 1 2 3
If both header and skiprows are specified, header will be
relative to the end of skiprows. For example:
In [76]: data = (
....: "# empty\n"
....: "# second empty line\n"
....: "# third emptyline\n"
....: "X,Y,Z\n"
....: "1,2,3\n"
....: "A,B,C\n"
....: "1,2.,4.\n"
....: "5.,NaN,10.0\n"
....: )
....:
In [77]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
Out[78]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
Comments#
Sometimes comments or meta data may be included in a file:
In [79]: print(open("tmp.csv").read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [80]: df = pd.read_csv("tmp.csv")
In [81]: df
Out[81]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment keyword:
In [82]: df = pd.read_csv("tmp.csv", comment="#")
In [83]: df
Out[83]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
Dealing with Unicode data#
The encoding argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [84]: from io import BytesIO
In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
In [86]: data = data.decode("utf8").encode("latin-1")
In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1")
In [88]: df
Out[88]:
word length
0 Träumen 7
1 Grüße 5
In [89]: df["word"][1]
Out[89]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t
parse correctly at all without specifying the encoding. Full list of Python
standard encodings.
Index columns and trailing delimiters#
If a file has one more column of data than the number of column names, the
first column will be used as the DataFrame’s row names:
In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [93]: pd.read_csv(StringIO(data), index_col=0)
Out[93]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [95]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [96]: pd.read_csv(StringIO(data))
Out[96]:
a b c
4 apple bat NaN
8 orange cow NaN
In [97]: pd.read_csv(StringIO(data), index_col=False)
Out[97]:
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the
index_col specification is based on that subset, not the original data.
In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [99]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"])
Out[100]:
b c
4 bat NaN
8 cow NaN
In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
Out[101]:
b c
4 bat NaN
8 cow NaN
Date Handling#
Specifying date columns#
To better facilitate working with datetime data, read_csv()
uses the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [102]: with open("foo.csv", mode="w") as f:
.....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5")
.....:
# Use a column as an index, and parse it as dates.
In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True)
In [104]: df
Out[104]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are Python datetime objects
In [105]: df.index
Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately,
or store various date fields separately. The parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
In [106]: data = (
.....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
.....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
.....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
.....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
.....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
.....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900"
.....: )
.....:
In [107]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
In [109]: df
Out[109]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col keyword:
In [110]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True
.....: )
.....:
In [111]: df
Out[111]:
1_2 1_3 0 ... 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59
[6 rows x 7 columns]
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]] means the two columns should be parsed into a
single column.
You can also use a dict to specify custom name columns:
In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
In [114]: df
Out[114]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:
In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [116]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0
.....: ) # index is the nominal column
.....:
In [117]: df
Out[117]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. For non-standard
datetime parsing, use to_datetime() after pd.read_csv.
Note
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster, ~20x has been observed.
Date parsing functions#
Finally, the parser allows you to specify a custom date_parser function to
take full advantage of the flexibility of the date parsing API:
In [118]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime
.....: )
.....:
In [119]: df
Out[119]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:
date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).
If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).
If #2 fails, date_parser is called once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
Note that performance-wise, you should try these methods of parsing dates in order:
Try to infer the format using infer_datetime_format=True (see section below).
If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...).
If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
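For instance, a minimal sketch of the second option above, assuming a known day-first format (hypothetical data):

from io import StringIO

import pandas as pd

data = "date,value\n27/01/1999,1\n28/01/1999,2"

# When the format is known, passing it explicitly avoids per-element guessing.
df = pd.read_csv(
    StringIO(data),
    parse_dates=["date"],
    date_parser=lambda x: pd.to_datetime(x, format="%d/%m/%Y"),
)
print(df["date"].dtype)  # datetime64[ns]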
Parsing a CSV with mixed timezones#
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with parse_dates.
In [120]: content = """\
.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:
In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"])
In [122]: df["a"]
Out[122]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied
to_datetime() with utc=True as the date_parser.
In [123]: df = pd.read_csv(
.....: StringIO(content),
.....: parse_dates=["a"],
.....: date_parser=lambda col: pd.to_datetime(col, utc=True),
.....: )
.....:
In [124]: df["a"]
Out[124]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
Inferring datetime format#
If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00):
“20111230”
“2011/12/30”
“20111230 00:00:00”
“12/30/2011 00:00:00”
“30/Dec/2011 00:00:00”
“30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [125]: df = pd.read_csv(
.....: "foo.csv",
.....: index_col=0,
.....: parse_dates=True,
.....: infer_datetime_format=True,
.....: )
.....:
In [126]: df
Out[126]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
International date formats#
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:
In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c"
In [128]: print(data)
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [129]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [130]: pd.read_csv("tmp.csv", parse_dates=[0])
Out[130]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0])
Out[131]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
Writing CSVs to binary file objects#
New in version 1.2.0.
df.to_csv(..., mode="wb") allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
mode as pandas will auto-detect whether the file object is
opened in text or binary mode.
In [132]: import io
In [133]: data = pd.DataFrame([0, 1, 2])
In [134]: buffer = io.BytesIO()
In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")
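Continuing the example above (a minimal sketch, not part of the original session), the buffer now holds gzip-compressed CSV bytes. It can be rewound and read back, passing compression="gzip" explicitly because compression cannot be inferred from a buffer:

# Rewind the in-memory buffer and decompress while reading it back.
buffer.seek(0)
roundtrip = pd.read_csv(buffer, compression="gzip", index_col=0)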
Specifying method for floating-point conversion#
The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [136]: val = "0.3066101993807095471566981359501369297504425048828125"
In [137]: data = "a,b,c\n1,2,{0}".format(val)
In [138]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision=None,
.....: )["c"][0] - float(val)
.....: )
.....:
Out[138]: 5.551115123125783e-17
In [139]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision="high",
.....: )["c"][0] - float(val)
.....: )
.....:
Out[139]: 5.551115123125783e-17
In [140]: abs(
.....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0]
.....: - float(val)
.....: )
.....:
Out[140]: 0.0
Thousand separators#
For large numbers that have been written with a thousands separator, you can
set the thousands keyword to a string of length 1 so that integers will be parsed
correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [141]: data = (
.....: "ID|level|category\n"
.....: "Patient1|123,000|x\n"
.....: "Patient2|23,000|y\n"
.....: "Patient3|1,234,018|z"
.....: )
.....:
In [142]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [143]: df = pd.read_csv("tmp.csv", sep="|")
In [144]: df
Out[144]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [145]: df.level.dtype
Out[145]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",")
In [147]: df
Out[147]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [148]: df.level.dtype
Out[148]: dtype('int64')
NA values#
To control which values are parsed as missing values (which are signified by
NaN), specify a string in na_values. If you specify a list of strings,
then all values in it are considered to be missing values. If you specify a
number (a float, like 5.0 or an integer like 5), the
corresponding equivalent values will also imply a missing value (in this case
effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv("path_to_file.csv", na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in
addition to the defaults. A string will first be interpreted as a numerical
5, then as a NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])
Above, only an empty field will be recognized as NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])
Above, both NA and 0 as strings are NaN.
pd.read_csv("path_to_file.csv", na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as
NaN.
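The path_to_file.csv calls above are illustrative placeholders; a minimal runnable sketch of the same behavior with hypothetical in-memory data:

from io import StringIO

import pandas as pd

data = "id,score\nA,5\nB,\nC,NA"

# With the defaults, the empty field and "NA" both become NaN.
print(pd.read_csv(StringIO(data)))

# Keeping only the empty string as a missing-value marker leaves "NA" as text.
print(pd.read_csv(StringIO(data), keep_default_na=False, na_values=[""]))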
Infinity#
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity).
These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
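A minimal sketch with hypothetical data:

from io import StringIO

import pandas as pd

data = "a\ninf\n-Inf\n1.5"

df = pd.read_csv(StringIO(data))
print(df["a"].tolist())  # [inf, -inf, 1.5], regardless of the case used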
Returning Series#
Using the squeeze keyword, the parser will return output with a single column
as a Series:
Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by
read_csv instead.
In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018"
In [150]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [151]: print(open("tmp.csv").read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [152]: output = pd.read_csv("tmp.csv", squeeze=True)
In [153]: output
Out[153]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [154]: type(output)
Out[154]: pandas.core.series.Series
Boolean values#
The common values True, False, TRUE, and FALSE are all
recognized as boolean. Occasionally you might want to recognize other values
as being boolean. To do this, use the true_values and false_values
options as follows:
In [155]: data = "a,b,c\n1,Yes,2\n3,No,4"
In [156]: print(data)
a,b,c
1,Yes,2
3,No,4
In [157]: pd.read_csv(StringIO(data))
Out[157]:
a b c
0 1 Yes 2
1 3 No 4
In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
Out[158]:
a b c
0 1 True 2
1 3 False 4
Handling “bad” lines#
Some files may have malformed lines with too few fields or too many. Lines with
too few fields will have NA values filled in the trailing fields. Lines with
too many fields will raise an error by default:
In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
In [160]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
Cell In[160], line 1
----> 1 pd.read_csv(StringIO(data))
File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
209 else:
210 kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
325 if len(args) > num_allow_args:
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
935 kwds_defaults = _refine_defaults_read(
936 dialect,
937 delimiter,
(...)
946 defaults={"delimiter": ","},
947 )
948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds)
608 return parser
610 with parser:
--> 611 return parser.read(nrows)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows)
1771 nrows = validate_integer("nrows", nrows)
1772 try:
1773 # error: "ParserBase" has no attribute "read"
1774 (
1775 index,
1776 columns,
1777 col_dict,
-> 1778 ) = self._engine.read( # type: ignore[attr-defined]
1779 nrows
1780 )
1781 except Exception:
1782 self.close()
File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
228 try:
229 if self.low_memory:
--> 230 chunks = self._reader.read_low_memory(nrows)
231 # destructive to chunks
232 data = _concatenate_chunks(chunks)
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
Or pass a callable function to handle the bad line if engine="python".
The bad line will be a list of strings that was split by the sep:
In [29]: external_list = []
In [30]: def bad_lines_func(line):
...: external_list.append(line)
...: return line[-3:]
In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
Out[31]:
a b c
0 1 2 3
1 5 6 7
2 8 9 10
In [32]: external_list
Out[32]: [4, 5, 6, 7]
New in version 1.4.0.
You can also use the usecols parameter to eliminate extraneous column
data that appear in some lines but not others:
In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[33]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
In case you want to keep all data including the lines with too many fields, you can
specify a sufficient number of names. This ensures that lines with not enough
fields are filled with NaN.
In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd'])
Out[34]:
a b c d
0 1 2 3 NaN
1 4 5 6 7
2 8 9 10 NaN
Dialect#
The dialect keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f"
In [162]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this using dialect:
In [163]: import csv
In [164]: dia = csv.excel()
In [165]: dia.quoting = csv.QUOTE_NONE
In [166]: pd.read_csv(StringIO(data), dialect=dia)
Out[166]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [167]: data = "a,b,c~1,2,3~4,5,6"
In [168]: pd.read_csv(StringIO(data), lineterminator="~")
Out[168]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace
after a delimiter:
In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6"
In [170]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [171]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[171]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to “do the right thing” and not be fragile. Type
inference is a pretty big deal. If a column can be coerced to integer dtype
without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.
Quoting and Escape Characters#
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:
In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [173]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [174]: pd.read_csv(StringIO(data), escapechar="\\")
Out[174]:
a b
0 hello, "Bob", nice to see you 5
Files with fixed width columns#
While read_csv() reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters, and
a different usage of the delimiter parameter:
colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behavior, if not specified, is to infer.
widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.
delimiter: Characters to consider as filler characters in the fixed-width file.
Can be used to specify the filler character of the fields
if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [175]: data1 = (
.....: "id8141 360.242940 149.910199 11950.7\n"
.....: "id1594 444.953632 166.985655 11788.4\n"
.....: "id1849 364.136849 183.628767 11806.2\n"
.....: "id1230 413.836124 184.375703 11916.8\n"
.....: "id1948 502.953953 173.237159 12468.3"
.....: )
.....:
In [176]: with open("bar.csv", "w") as f:
.....: f.write(data1)
.....:
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-intervals
In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0)
In [179]: df
Out[179]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically picks default integer column names when the
header=None argument is specified. Alternatively, you can supply just the
column widths for contiguous columns:
# Widths are a list of integers
In [180]: widths = [6, 14, 13, 10]
In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None)
In [182]: df
Out[182]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns
so it’s ok to have extra separation between the columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).
In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0)
In [184]: df
Out[184]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of
parsed columns to be different from the inferred type.
In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[185]:
1 float64
2 float64
3 float64
dtype: object
In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[186]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes#
Files with an “implicit” index column#
Consider a file with one less entry in the header than the number of data
column:
In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"
In [188]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In [189]: with open("foo.csv", "w") as f:
.....: f.write(data)
.....:
In this special case, read_csv assumes that the first column is to be used
as the index of the DataFrame:
In [190]: pd.read_csv("foo.csv")
Out[190]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need
to do as before:
In [191]: df = pd.read_csv("foo.csv", parse_dates=True)
In [192]: df.index
Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
Reading an index with a MultiIndex#
Suppose you have data indexed by two columns:
In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'
In [194]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
In [195]: with open("mindex_ex.csv", mode="w") as f:
.....: f.write(data)
.....:
The index_col argument to read_csv can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1])
In [197]: df
Out[197]:
zit xit
year indiv
1977 A 1.2 0.6
B 1.5 0.5
In [198]: df.loc[1977]
Out[198]:
zit xit
indiv
A 1.2 0.6
B 1.5 0.5
Reading columns with a MultiIndex#
By specifying a list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows.
In [199]: from pandas._testing import makeCustomDataframe as mkdf
In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [201]: df.to_csv("mi.csv")
In [202]: print(open("mi.csv").read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])
Out[203]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
read_csv is also able to interpret a more common format
of multi-column indices.
In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"
In [205]: print(data)
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [206]: with open("mi2.csv", "w") as fh:
.....: fh.write(data)
.....:
In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0)
Out[207]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note
If an index_col is not specified (e.g. you don’t have an index, or wrote it
with df.to_csv(..., index=False)), then any names on the columns index will
be lost.
Automatically “sniffing” the delimiter#
read_csv is capable of inferring delimited (not necessarily
comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [208]: df = pd.DataFrame(np.random.randn(10, 4))
In [209]: df.to_csv("tmp.csv", sep="|")
In [210]: df.to_csv("tmp2.csv", sep=":")
In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python")
Out[211]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
Reading multiple files to create a single DataFrame#
It’s best to use concat() to combine multiple files.
See the cookbook for an example.
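As a minimal sketch (the file pattern data_*.csv is hypothetical and assumes the files share the same columns):

import glob

import pandas as pd

# Combine every CSV matching a pattern into a single DataFrame.
files = sorted(glob.glob("data_*.csv"))
combined = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)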
Iterating through files chunk by chunk#
Suppose you wish to iterate through a (potentially very large) file lazily
rather than reading the entire file into memory, such as the following:
In [212]: df = pd.DataFrame(np.random.randn(10, 4))
In [213]: df.to_csv("tmp.csv", sep="|")
In [214]: table = pd.read_csv("tmp.csv", sep="|")
In [215]: table
Out[215]:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
By specifying a chunksize to read_csv, the return
value will be an iterable object of type TextFileReader:
In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
Unnamed: 0 0 1 2 3
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
Unnamed: 0 0 1 2 3
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
Specifying iterator=True will also return the TextFileReader object:
In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
.....: reader.get_chunk(5)
.....:
Specifying the parser engine#
pandas currently supports three engines, the C engine, the python engine, and an experimental
pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest
on larger workloads and is equivalent in speed to the C engine on most other workloads.
The python engine tends to be slower than the pyarrow and C engines on most workloads. However,
the pyarrow engine is much less robust than the C engine, which in turn lacks a few features
present in the Python engine.
Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if C-unsupported options are specified.
Currently, options unsupported by the C and pyarrow engines include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.
Options that are unsupported by the pyarrow engine which are not covered by the list above include:
float_precision
chunksize
comment
nrows
thousands
memory_map
dialect
warn_bad_lines
error_bad_lines
on_bad_lines
delim_whitespace
quoting
lineterminator
converters
decimal
iterator
dayfirst
infer_datetime_format
verbose
skipinitialspace
low_memory
Specifying these options with engine='pyarrow' will raise a ValueError.
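As a minimal illustration, reusing the tmp.csv file written above (the last line assumes the pyarrow package is installed):
# the same pipe-delimited file read with each engine; the results should match,
# but only pass options that the chosen engine supports
df_c = pd.read_csv("tmp.csv", sep="|", engine="c")
df_python = pd.read_csv("tmp.csv", sep="|", engine="python")
df_pyarrow = pd.read_csv("tmp.csv", sep="|", engine="pyarrow")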
Reading/writing remote files#
You can pass in a URL to read or write remote files to many of pandas’ IO
functions - the following example shows reading a CSV file:
df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
New in version 1.3.0.
A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the storage_options keyword argument as shown below:
headers = {"User-Agent": "pandas"}
df = pd.read_csv(
"https://download.bls.gov/pub/time.series/cu/cu.item",
sep="\t",
storage_options=headers
)
All URLs which are not local files or HTTP(s) are handled by
fsspec, if installed, and its various filesystem implementations
(including Amazon S3, Google Cloud, SSH, FTP, webHDFS…).
Some of these implementations will require additional packages to be
installed, for example
S3 URLs require the s3fs library:
df = pd.read_json("s3://pandas-test/adatafile.json")
When dealing with remote storage systems, you might need
extra configuration with environment variables or config files in
special locations. For example, to access data in your S3 bucket,
you will need to define credentials in one of the several ways listed in
the S3Fs documentation. The same is true
for several of the storage backends, and you should follow the links
at fsimpl1 for implementations built into fsspec and fsimpl2
for those not included in the main fsspec
distribution.
You can also pass parameters directly to the backend driver. For example,
if you do not have S3 credentials, you can still access public data by
specifying an anonymous connection, such as
New in version 1.2.0.
pd.read_csv(
"s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
"-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"anon": True},
)
fsspec also allows complex URLs, for accessing data in compressed
archives, local caching of files, and more. To locally cache the above
example, you would modify the call to
pd.read_csv(
"simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
"SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"s3": {"anon": True}},
)
where we specify that the “anon” parameter is meant for the “s3” part of
the implementation, not to the caching implementation. Note that this caches to a temporary
directory for the duration of the session only, but you can also specify
a permanent store.
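For example, a hedged sketch of a persistent cache using fsspec’s "filecache" protocol (the cache directory below is arbitrary, and s3fs must be installed):
pd.read_csv(
    "filecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
    "SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
    storage_options={
        "s3": {"anon": True},
        # cache_storage points at a directory that outlives the session
        "filecache": {"cache_storage": "/tmp/pandas-filecache"},
    },
)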
Writing out data#
Writing to CSV format#
The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments. Only the first is required.
path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''
sep : Field delimiter for the output file (default “,”)
na_rep: A string representation of a missing value (default ‘’)
float_format: Format string for floating point numbers
columns: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default ‘w’
encoding: a string representing the encoding to use if the contents are
non-ASCII, for Python versions prior to 3
lineterminator: Character sequence denoting line end (default os.linesep)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
quotechar: Character used to quote fields (default ‘”’)
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when
appropriate (default None)
chunksize: Number of rows to write at a time
date_format: Format string for datetime objects
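A minimal sketch tying a few of these arguments together (the file name and data are made up):
df = pd.DataFrame({"A": [1.5, np.nan, 3.25], "B": ["x", "y", "z"]})

# semicolon-delimited output with no index column, an explicit NA marker
# and a fixed float format
df.to_csv("out.csv", sep=";", index=False, na_rep="NA", float_format="%.2f")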
Writing a formatted string#
The DataFrame object has an instance method to_string which allows control
over the string representation of the object. All arguments are optional:
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
float_format default None, a function which takes a single (float)
argument and returns a formatted string; to be applied to floats in the
DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical
index to print every MultiIndex key at each row.
index_names default True, will print the names of the indices
index default True, will print the index (ie, row labels)
header default True, will print the column labels
justify default left, will print column headers left- or
right-justified
The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.
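A small sketch of DataFrame.to_string using a couple of these arguments:
df = pd.DataFrame({"A": [1.23456, None], "B": [10, 20]})

# render to a string with two-decimal floats and a custom NA representation
print(df.to_string(float_format="{:.2f}".format, na_rep="-"))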
JSON#
Read and write JSON format files and strings.
Writing JSON#
A Series or DataFrame can be converted to a valid JSON string. Use to_json
with optional parameters:
path_or_buf : the pathname or buffer to write the output
This can be None in which case a JSON string is returned
orient :
Series:
default is index
allowed values are {split, records, index}
DataFrame:
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If records orient, then will write each record per line as json.
Note
NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [219]: json = dfj.to_json()
In [220]: json
Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'
Orient options#
There are a number of different options for the format of the resulting JSON
file / string. Consider the following DataFrame and Series:
In [221]: dfjo = pd.DataFrame(
.....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list("ABC"),
.....: index=list("xyz"),
.....: )
.....:
In [222]: dfjo
Out[222]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D")
In [224]: sjo
Out[224]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as
nested JSON objects with column labels acting as the primary index:
In [225]: dfjo.to_json(orient="columns")
Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
# Not available for Series
Index oriented (the default for Series) similar to column oriented
but the index labels are now primary:
In [226]: dfjo.to_json(orient="index")
Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [227]: sjo.to_json(orient="index")
Out[227]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
In [228]: dfjo.to_json(orient="records")
Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [229]: sjo.to_json(orient="records")
Out[229]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of
values only, column and index labels are not included:
In [230]: dfjo.to_json(orient="values")
Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]'
# Not available for Series
Split oriented serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for Series:
In [231]: dfjo.to_json(orient="split")
Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}'
In [232]: sjo.to_json(orient="split")
Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the
preservation of metadata including but not limited to dtypes and index names.
Note
Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.
Date handling#
Writing in ISO date format:
In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [234]: dfd["date"] = pd.Timestamp("20130101")
In [235]: dfd = dfd.sort_index(axis=1, ascending=False)
In [236]: json = dfd.to_json(date_format="iso")
In [237]: json
Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing in ISO date format, with microseconds:
In [238]: json = dfd.to_json(date_format="iso", date_unit="us")
In [239]: json
Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Epoch timestamps, in seconds:
In [240]: json = dfd.to_json(date_format="epoch", date_unit="s")
In [241]: json
Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing to a file, with a date index and a date column:
In [242]: dfj2 = dfj.copy()
In [243]: dfj2["date"] = pd.Timestamp("20130101")
In [244]: dfj2["ints"] = list(range(5))
In [245]: dfj2["bools"] = True
In [246]: dfj2.index = pd.date_range("20130101", periods=5)
In [247]: dfj2.to_json("test.json")
In [248]: with open("test.json") as fh:
.....: print(fh.read())
.....:
{"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior#
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called
for each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it.
A toDict method should return a dict which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail
with an OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler.
For example:
>>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json() # raises
RuntimeError: Unhandled numpy dtype 15
can be dealt with by specifying a simple default_handler:
In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
Reading JSON#
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ=series
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file://localhost/path/to/table.json
typ : type of object to recover (series or frame), default ‘frame’
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data.
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
numpy : direct decoding to NumPy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see Orient Options for an
overview.
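For example, a minimal round trip with a non-default orient might look like:
df = pd.DataFrame({"A": [1, 2]}, index=["x", "y"])

# encode and decode with the same non-default orient
json_split = df.to_json(orient="split")
roundtripped = pd.read_json(json_split, orient="split")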
Data conversion#
The default of convert_axes=True, dtype=True, and convert_dates=True
will try to parse the axes, and all of the data into appropriate types,
including dates. If you need to override specific dtypes, pass a dict to
dtype. convert_axes should only be set to False if you need to
preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.
Note
Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
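As a small illustration of these criteria (a sketch with made-up epoch-millisecond values), a column labeled modified is parsed back as datetimes by default, while other integer columns are left alone:
df = pd.DataFrame({"modified": [1356998400000, 1357084800000], "value": [1, 2]})

# "modified" matches the date-like rules above and comes back as datetime64[ns]
pd.read_json(df.to_json()).dtypes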
Warning
When reading JSON data, automatic coercing into dtypes has some quirks:
an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
Reading from a JSON string:
In [250]: pd.read_json(json)
Out[250]:
date B A
0 2013-01-01 0.403310 0.176444
1 2013-01-01 0.301624 -0.154951
2 2013-01-01 -1.369849 -2.179861
3 2013-01-01 1.462696 -0.954208
4 2013-01-01 -0.826591 -1.743161
Reading from a file:
In [251]: pd.read_json("test.json")
Out[251]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [252]: pd.read_json("test.json", dtype=object).dtypes
Out[252]:
A object
B object
date object
ints object
bools object
dtype: object
Specify dtypes for conversion:
In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes
Out[253]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object
Preserve string indices:
In [254]: si = pd.DataFrame(
.....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)]
.....: )
.....:
In [255]: si
Out[255]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [256]: si.index
Out[256]: Index(['0', '1', '2', '3'], dtype='object')
In [257]: si.columns
Out[257]: Int64Index([0, 1, 2, 3], dtype='int64')
In [258]: json = si.to_json()
In [259]: sij = pd.read_json(json, convert_axes=False)
In [260]: sij
Out[260]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [261]: sij.index
Out[261]: Index(['0', '1', '2', '3'], dtype='object')
In [262]: sij.columns
Out[262]: Index(['0', '1', '2', '3'], dtype='object')
Dates written in nanoseconds need to be read back in nanoseconds:
In [263]: json = dfj2.to_json(date_unit="ns")
# Try to parse timestamps as milliseconds -> Won't Work
In [264]: dfju = pd.read_json(json, date_unit="ms")
In [265]: dfju
Out[265]:
A B date ints bools
1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True
1357084800000000000 0.695775 0.341734 1356998400000000000 1 True
1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True
1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True
1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True
# Let pandas detect the correct precision
In [266]: dfju = pd.read_json(json)
In [267]: dfju
Out[267]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [268]: dfju = pd.read_json(json, date_unit="ns")
In [269]: dfju
Out[269]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
The Numpy parameter#
Note
This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
data:
In [270]: randfloats = np.random.uniform(-100, 1000, 10000)
In [271]: randfloats.shape = (1000, 10)
In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ"))
In [273]: jsonfloats = dffloats.to_json()
In [274]: %timeit pd.read_json(jsonfloats)
7.91 ms +- 77.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [275]: %timeit pd.read_json(jsonfloats, numpy=True)
5.71 ms +- 333 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
The speedup is less noticeable for smaller datasets:
In [276]: jsonfloats = dffloats.head(100).to_json()
In [277]: %timeit pd.read_json(jsonfloats)
4.46 ms +- 25.9 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [278]: %timeit pd.read_json(jsonfloats, numpy=True)
4.09 ms +- 32.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning
Direct NumPy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:
data is numeric.
data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.
labels are ordered. Labels are only read from the first container, it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.
Normalization#
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data
into a flat table.
In [279]: data = [
.....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
.....: {"name": {"given": "Mark", "family": "Regner"}},
.....: {"id": 2, "name": "Faye Raker"},
.....: ]
.....:
In [280]: pd.json_normalize(data)
Out[280]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
In [281]: data = [
.....: {
.....: "state": "Florida",
.....: "shortname": "FL",
.....: "info": {"governor": "Rick Scott"},
.....: "county": [
.....: {"name": "Dade", "population": 12345},
.....: {"name": "Broward", "population": 40000},
.....: {"name": "Palm Beach", "population": 60000},
.....: ],
.....: },
.....: {
.....: "state": "Ohio",
.....: "shortname": "OH",
.....: "info": {"governor": "John Kasich"},
.....: "county": [
.....: {"name": "Summit", "population": 1234},
.....: {"name": "Cuyahoga", "population": 1337},
.....: ],
.....: },
.....: ]
.....:
In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]])
Out[282]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization.
With max_level=1 the following snippet normalizes until 1st nesting level of the provided dict.
In [283]: data = [
.....: {
.....: "CreatedBy": {"Name": "User001"},
.....: "Lookup": {
.....: "TextField": "Some text",
.....: "UserField": {"Id": "ID001", "Name": "Name001"},
.....: },
.....: "Image": {"a": "b"},
.....: }
.....: ]
.....:
In [284]: pd.json_normalize(data, max_level=1)
Out[284]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b
Line delimited json#
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.
In [285]: jsonl = """
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: """
.....:
In [286]: df = pd.read_json(jsonl, lines=True)
In [287]: df
Out[287]:
a b
0 1 2
1 3 4
In [288]: df.to_json(orient="records", lines=True)
Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n'
# reader is an iterator that returns ``chunksize`` lines each iteration
In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4
Table schema#
Table Schema is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient table to build
a JSON string with two fields, schema and data.
In [290]: df = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": ["a", "b", "c"],
.....: "C": pd.date_range("2016-01-01", freq="d", periods=3),
.....: },
.....: index=pd.Index(range(3), name="idx"),
.....: )
.....:
In [291]: df
Out[291]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [292]: df.to_json(orient="table", date_format="iso")
Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
The schema field contains the fields key, which itself contains
a list of column name to type pairs, including the Index or MultiIndex
(see below for a list of types).
The schema field also contains a primaryKey field if the (Multi)index
is unique.
The second field, data, contains the serialized data with the records
orient.
The index is included, and any datetimes are ISO 8601 formatted, as required
by the Table Schema spec.
The full list of types supported are described in the Table Schema
spec. This table shows the mapping from pandas types:
pandas type           Table Schema type
int64                 integer
float64               number
bool                  boolean
datetime64[ns]        datetime
timedelta64[ns]       duration
categorical           any
object                str
A few notes on the generated table schema:
The schema object contains a pandas_version field. This contains
the version of pandas’ dialect of the schema, and will be incremented
with each revision.
All dates are converted to UTC when serializing, even timezone naive values,
which are treated as UTC with an offset of 0.
In [293]: from pandas.io.json import build_table_schema
In [294]: s = pd.Series(pd.date_range("2016", periods=4))
In [295]: build_table_schema(s)
Out[295]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
datetimes with a timezone (before serializing) include an additional field
tz with the time zone name (e.g. 'US/Central').
In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central"))
In [297]: build_table_schema(s_tz)
Out[297]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Periods are converted to timestamps before serialization, and so have the
same behavior of being converted to UTC. In addition, periods will contain
an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4))
In [299]: build_table_schema(s_per)
Out[299]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Categoricals use the any type and an enum constraint listing
the set of possible values. Additionally, an ordered field is included:
In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"]))
In [301]: build_table_schema(s_cat)
Out[301]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
A primaryKey field, containing an array of labels, is included
if the index is unique:
In [302]: s_dupe = pd.Series([1, 2], index=[1, 1])
In [303]: build_table_schema(s_dupe)
Out[303]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '1.4.0'}
The primaryKey behavior is the same with MultiIndexes, but in this
case the primaryKey is an array:
In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)]))
In [305]: build_table_schema(s_multi)
Out[305]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '1.4.0'}
The default naming roughly follows these rules:
For Series, the object.name is used. If that is None, then the
name is values
For DataFrames, the stringified version of the column name is used
For Index (not MultiIndex), index.name is used, with a
fallback to index if that is None.
For MultiIndex, mi.names is used. If any level has no name,
then level_<i> is used.
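A quick sketch of the Series rule using build_table_schema (imported above):
# an unnamed Series falls back to the default field name "values" ...
build_table_schema(pd.Series([1, 2, 3]))

# ... while a named Series uses its own name
build_table_schema(pd.Series([1, 2, 3], name="score"))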
read_json also accepts orient='table' as an argument. This allows for
the preservation of metadata such as dtypes and index names in a
round-trippable manner.
In [306]: df = pd.DataFrame(
.....: {
.....: "foo": [1, 2, 3, 4],
.....: "bar": ["a", "b", "c", "d"],
.....: "baz": pd.date_range("2018-01-01", freq="d", periods=4),
.....: "qux": pd.Categorical(["a", "b", "c", "c"]),
.....: },
.....: index=pd.Index(range(4), name="idx"),
.....: )
.....:
In [307]: df
Out[307]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [308]: df.dtypes
Out[308]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [309]: df.to_json("test.json", orient="table")
In [310]: new_df = pd.read_json("test.json", orient="table")
In [311]: new_df
Out[311]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [312]: new_df.dtypes
Out[312]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index
is not round-trippable, nor are any names beginning with 'level_' within a
MultiIndex. These are used by default in DataFrame.to_json() to
indicate missing values and the subsequent read cannot distinguish the intent.
In [313]: df.index.name = "index"
In [314]: df.to_json("test.json", orient="table")
In [315]: new_df = pd.read_json("test.json", orient="table")
In [316]: print(new_df.index.name)
None
When using orient='table' along with user-defined ExtensionArray,
the generated schema will contain an additional extDtype key in the respective
fields element. This extra key is not standard but does enable JSON roundtrips
for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).
The extDtype key carries the name of the extension; if you have properly registered
the ExtensionDtype, pandas will use said name to perform a lookup into the registry
and re-convert the serialized data into your custom dtype.
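As a hedged sketch using pandas’ own nullable integer dtype (a registered extension type), the dtype should survive a table-oriented round trip:
df_ext = pd.DataFrame({"a": pd.array([1, 2, 3], dtype="Int64")})

# the schema entry for "a" carries an extDtype key, so the nullable dtype is restored
roundtripped = pd.read_json(df_ext.to_json(orient="table"), orient="table")
roundtripped.dtypes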
HTML#
Reading HTML content#
Warning
We highly encourage you to read the HTML Table Parsing gotchas
below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML
string/file/URL and will parse HTML tables into list of pandas DataFrames.
Let’s look at a few examples.
Note
read_html returns a list of DataFrame objects, even if there is
only a single table contained in the HTML content.
Read a URL with no options:
In [320]: "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538
1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537
2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536
3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535
4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534
.. ... ... ... ... ... ... ...
558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004
559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648
560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647
561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646
562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645
[563 rows x 7 columns]]
Note
The data from the above URL changes every Monday so the resulting data above may be slightly different.
Read in the content of the file from the above URL and pass it to read_html
as a string:
In [317]: html_str = """
.....: <table>
.....: <tr>
.....: <th>A</th>
.....: <th colspan="1">B</th>
.....: <th rowspan="1">C</th>
.....: </tr>
.....: <tr>
.....: <td>a</td>
.....: <td>b</td>
.....: <td>c</td>
.....: </tr>
.....: </table>
.....: """
.....:
In [318]: with open("tmp.html", "w") as f:
.....: f.write(html_str)
.....:
In [319]: df = pd.read_html("tmp.html")
In [320]: df[0]
Out[320]:
A B C
0 a b c
You can even pass in an instance of StringIO if you so desire:
In [321]: dfs = pd.read_html(StringIO(html_str))
In [322]: dfs[0]
Out[322]:
A B C
0 a b c
Note
The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.
Read a URL and match a table that contains specific text:
match = "Metcalf Bank"
df_list = pd.read_html(url, match=match)
Specify a header row (by default <th> or <td> elements located within a
<thead> are used to form the column index, if multiple rows are contained within
<thead> then a MultiIndex is created); if specified, the header row is taken
from the data minus the parsed header elements (<th> elements).
dfs = pd.read_html(url, header=0)
Specify an index column:
dfs = pd.read_html(url, index_col=0)
Specify a number of rows to skip:
dfs = pd.read_html(url, skiprows=0)
Specify a number of rows to skip using a list (range works
as well):
dfs = pd.read_html(url, skiprows=range(2))
Specify an HTML attribute:
dfs1 = pd.read_html(url, attrs={"id": "table"})
dfs2 = pd.read_html(url, attrs={"class": "sortable"})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=["No Acquirer"])
Specify whether to keep the default set of NaN values:
dfs = pd.read_html(url, keep_default_na=False)
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
columns to strings.
url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code"
dfs = pd.read_html(
url_mcc,
match="Telekom Albania",
header=0,
converters={"MNC": str},
)
Use some combination of the above:
dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format="{0:.40g}".format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only
parser you provide. If you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings. You may use:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])
Or you could pass flavor='lxml' without a list:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")
However, if you have bs4 and html5lib installed and pass None or ['lxml',
'bs4'] then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])
Links can be extracted from cells along with the text using extract_links="all".
In [323]: html_table = """
.....: <table>
.....: <tr>
.....: <th>GitHub</th>
.....: </tr>
.....: <tr>
.....: <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
.....: </tr>
.....: </table>
.....: """
.....:
In [324]: df = pd.read_html(
.....: html_table,
.....: extract_links="all"
.....: )[0]
.....:
In [325]: df
Out[325]:
(GitHub, None)
0 (pandas, https://github.com/pandas-dev/pandas)
In [326]: df[("GitHub", None)]
Out[326]:
0 (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object
In [327]: df[("GitHub", None)].str[1]
Out[327]:
0 https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object
New in version 1.5.0.
Writing to HTML files#
DataFrame objects have an instance method to_html which renders the
contents of the DataFrame as an HTML table. The function arguments are as
in the method to_string described above.
Note
Not all of the possible options for DataFrame.to_html are shown here for
brevity’s sake. See to_html() for the
full set of options.
Note
In an HTML-rendering supported environment like a Jupyter Notebook, display(HTML(...))
will render the raw HTML into the environment.
In [328]: from IPython.display import display, HTML
In [329]: df = pd.DataFrame(np.random.randn(2, 2))
In [330]: df
Out[330]:
0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [331]: html = df.to_html()
In [332]: print(html) # raw html
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [333]: display(HTML(html))
<IPython.core.display.HTML object>
The columns argument will limit the columns shown:
In [334]: html = df.to_html(columns=[0])
In [335]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
</tr>
</tbody>
</table>
In [336]: display(HTML(html))
<IPython.core.display.HTML object>
float_format takes a Python callable to control the precision of floating
point values:
In [337]: html = df.to_html(float_format="{0:.10f}".format)
In [338]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0703192665</td>
<td>1.7739074228</td>
</tr>
<tr>
<th>1</th>
<td>0.2539083433</td>
<td>0.4145805920</td>
</tr>
</tbody>
</table>
In [339]: display(HTML(html))
<IPython.core.display.HTML object>
bold_rows will make the row labels bold by default, but you can turn that
off:
In [340]: html = df.to_html(bold_rows=False)
In [341]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<td>1</td>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [342]: display(HTML(html))
<IPython.core.display.HTML object>
The classes argument provides the ability to give the resulting HTML
table CSS classes. Note that these classes are appended to the existing
'dataframe' class.
In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))
<table border="1" class="dataframe awesome_table_class even_more_awesome_class">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
The render_links argument provides the ability to add hyperlinks to cells
that contain URLs.
In [344]: url_df = pd.DataFrame(
.....: {
.....: "name": ["Python", "pandas"],
.....: "url": ["https://www.python.org/", "https://pandas.pydata.org"],
.....: }
.....: )
.....:
In [345]: html = url_df.to_html(render_links=True)
In [346]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</a></td>
</tr>
<tr>
<th>1</th>
<td>pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
In [347]: display(HTML(html))
<IPython.core.display.HTML object>
Finally, the escape argument allows you to control whether the
“<”, “>” and “&” characters are escaped in the resulting HTML (by default it is
True). So to get the HTML without escaped characters pass escape=False.
In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})
Escaped:
In [349]: html = df.to_html()
In [350]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [351]: display(HTML(html))
<IPython.core.display.HTML object>
Not escaped:
In [352]: html = df.to_html(escape=False)
In [353]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [354]: display(HTML(html))
<IPython.core.display.HTML object>
Note
Some browsers may not show a difference in the rendering of the previous two
HTML tables.
HTML Table Parsing Gotchas#
There are some versioning issues surrounding the libraries that are used to
parse HTML tables in the top-level pandas io function read_html.
Issues with lxml
Benefits
lxml is very fast.
lxml requires Cython to install correctly.
Drawbacks
lxml does not make any guarantees about the results of its parse
unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the
lxml backend, but this backend will use html5lib if lxml
fails to parse.
It is therefore highly recommended that you install both
BeautifulSoup4 and html5lib, so that you will still get a valid
result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially
just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
html5lib is far more lenient than lxml and consequently deals
with real-life markup in a much saner way rather than just, e.g.,
dropping an element without notifying you.
html5lib generates valid HTML5 markup from invalid markup
automatically. This is extremely important for parsing HTML tables,
since it guarantees a valid document. However, that does NOT mean that
it is “correct”, since the process of fixing markup does not have a
single definition.
html5lib is pure Python and requires no additional build steps beyond
its own installation.
Drawbacks
The biggest drawback to using html5lib is that it is slow as
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
LaTeX#
New in version 1.3.0.
Currently there are no methods to read from LaTeX, only output methods.
Writing to LaTeX files#
Note
DataFrame and Styler objects currently have a to_latex method. We recommend
using the Styler.to_latex() method
over DataFrame.to_latex() due to the former’s greater flexibility with
conditional styling, and the latter’s possible future deprecation.
Review the documentation for Styler.to_latex,
which gives examples of conditional styling and explains the operation of its keyword
arguments.
For simple application the following pattern is sufficient.
In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"])
In [356]: print(df.style.to_latex())
\begin{tabular}{lrr}
& c & d \\
a & 1 & 2 \\
b & 3 & 4 \\
\end{tabular}
To format values before output, chain the Styler.format
method.
In [357]: print(df.style.format("€ {}").to_latex())
\begin{tabular}{lrr}
& c & d \\
a & € 1 & € 2 \\
b & € 3 & € 4 \\
\end{tabular}
XML#
Reading XML#
New in version 1.3.0.
The top-level read_xml() function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas DataFrame.
Note
Since there is no standard XML structure where design types can vary in
many ways, read_xml works best with flatter, shallow versions. If
an XML document is deeply nested, use the stylesheet feature to
transform XML into a flatter version.
Let’s look at a few examples.
Read an XML string:
In [358]: xml = """<?xml version="1.0" encoding="UTF-8"?>
.....: <bookstore>
.....: <book category="cooking">
.....: <title lang="en">Everyday Italian</title>
.....: <author>Giada De Laurentiis</author>
.....: <year>2005</year>
.....: <price>30.00</price>
.....: </book>
.....: <book category="children">
.....: <title lang="en">Harry Potter</title>
.....: <author>J K. Rowling</author>
.....: <year>2005</year>
.....: <price>29.99</price>
.....: </book>
.....: <book category="web">
.....: <title lang="en">Learning XML</title>
.....: <author>Erik T. Ray</author>
.....: <year>2003</year>
.....: <price>39.95</price>
.....: </book>
.....: </bookstore>"""
.....:
In [359]: df = pd.read_xml(xml)
In [360]: df
Out[360]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read a URL with no options:
In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml")
In [362]: df
Out[362]:
category title author year price cover
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None
1 children Harry Potter J K. Rowling 2005 29.99 None
2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None
3 web Learning XML Erik T. Ray 2003 39.95 paperback
Read in the content of the “books.xml” file and pass it to read_xml
as a string:
In [363]: file_path = "books.xml"
In [364]: with open(file_path, "w") as f:
.....: f.write(xml)
.....:
In [365]: with open(file_path, "r") as f:
.....: df = pd.read_xml(f.read())
.....:
In [366]: df
Out[366]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read in the content of the “books.xml” as instance of StringIO or
BytesIO and pass it to read_xml:
In [367]: with open(file_path, "r") as f:
.....: sio = StringIO(f.read())
.....:
In [368]: df = pd.read_xml(sio)
In [369]: df
Out[369]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
In [370]: with open(file_path, "rb") as f:
.....: bio = BytesIO(f.read())
.....:
In [371]: df = pd.read_xml(bio)
In [372]: df
Out[372]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
You can even read XML from AWS S3 buckets, such as the NIH NCBI PMC Article Datasets providing
Biomedical and Life Science Journals:
In [373]: df = pd.read_xml(
.....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml",
.....: xpath=".//journal-meta",
.....: )
.....:
In [374]: df
Out[374]:
journal-id journal-title issn publisher
0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN
With lxml as the default parser, you access the full-featured XML library
that extends Python’s ElementTree API. One powerful tool is the ability to query
nodes selectively or conditionally with more expressive XPath:
In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]")
In [376]: df
Out[376]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
Specify only elements or only attributes to parse:
In [377]: df = pd.read_xml(file_path, elems_only=True)
In [378]: df
Out[378]:
title author year price
0 Everyday Italian Giada De Laurentiis 2005 30.00
1 Harry Potter J K. Rowling 2005 29.99
2 Learning XML Erik T. Ray 2003 39.95
In [379]: df = pd.read_xml(file_path, attrs_only=True)
In [380]: df
Out[380]:
category
0 cooking
1 children
2 web
XML documents can have namespaces with prefixes and default namespaces without
prefixes, both of which are denoted with a special attribute xmlns. In order
to parse by node under a namespace context, xpath must reference a prefix.
For example, below XML contains a namespace with prefix, doc, and URI at
https://example.com. In order to parse doc:row nodes,
namespaces must be used.
In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <doc:data xmlns:doc="https://example.com">
.....: <doc:row>
.....: <doc:shape>square</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides>4.0</doc:sides>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>circle</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides/>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>triangle</doc:shape>
.....: <doc:degrees>180</doc:degrees>
.....: <doc:sides>3.0</doc:sides>
.....: </doc:row>
.....: </doc:data>"""
.....:
In [382]: df = pd.read_xml(xml,
.....: xpath="//doc:row",
.....: namespaces={"doc": "https://example.com"})
.....:
In [383]: df
Out[383]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
Similarly, an XML document can have a default namespace without a prefix. Failing
to assign a temporary prefix will return no nodes and raise a ValueError.
But assigning any temporary name to the correct URI allows parsing by nodes.
In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <data xmlns="https://example.com">
.....: <row>
.....: <shape>square</shape>
.....: <degrees>360</degrees>
.....: <sides>4.0</sides>
.....: </row>
.....: <row>
.....: <shape>circle</shape>
.....: <degrees>360</degrees>
.....: <sides/>
.....: </row>
.....: <row>
.....: <shape>triangle</shape>
.....: <degrees>180</degrees>
.....: <sides>3.0</sides>
.....: </row>
.....: </data>"""
.....:
In [385]: df = pd.read_xml(xml,
.....: xpath="//pandas:row",
.....: namespaces={"pandas": "https://example.com"})
.....:
In [386]: df
Out[386]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
However, if the XPath does not reference node names, such as the default /*, then
namespaces is not required.
With lxml as parser, you can flatten nested XML documents with an XSLT
script which also can be string/file/URL types. As background, XSLT is
a special-purpose language written in a special XML file that can transform
original XML documents into other XML, HTML, even text (CSV, JSON, etc.)
using an XSLT processor.
For example, consider this somewhat nested structure of Chicago “L” Rides
where station and rides elements encapsulate data in their own sections.
With the XSLT below, lxml can transform the original nested document into a flatter
output (as shown below for demonstration) for easier parsing into a DataFrame:
In [387]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station id="40850" name="Library"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="41700" name="Washington/Wabash"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="40380" name="Clark/Lake"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: </response>"""
.....:
In [388]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/response">
.....: <xsl:copy>
.....: <xsl:apply-templates select="row"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <xsl:copy>
.....: <station_id><xsl:value-of select="station/@id"/></station_id>
.....: <station_name><xsl:value-of select="station/@name"/></station_name>
.....: <xsl:copy-of select="month|rides/*"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [389]: output = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station_id>40850</station_id>
.....: <station_name>Library</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>41700</station_id>
.....: <station_name>Washington/Wabash</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>40380</station_id>
.....: <station_name>Clark/Lake</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </row>
.....: </response>"""
.....:
In [390]: df = pd.read_xml(xml, stylesheet=xsl)
In [391]: df
Out[391]:
station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides
0 40850 Library ... 534.0 417.2
1 41700 Washington/Wabash ... 1909.8 1438.6
2 40380 Clark/Lake ... 1657.0 1453.8
[3 rows x 6 columns]
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse,
which are memory-efficient methods to iterate through an XML tree and extract specific elements
and attributes without holding the entire tree in memory.
New in version 1.5.0.
To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument.
Files should not be compressed or point to online sources but stored on local disk. Also, iterparse should be
a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of
any element or attribute that is a descendant (i.e., child, grandchild) of the repeating node. Since XPath is not
used in this method, descendants do not need to share the same relationship with one another. Below shows an example
of reading in Wikipedia’s very large (12 GB+) latest article data dump.
In [1]: df = pd.read_xml(
... "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
... iterparse = {"page": ["title", "ns", "id"]}
... )
... df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Writing XML#
New in version 1.3.0.
DataFrame objects have an instance method to_xml which renders the
contents of the DataFrame as an XML document.
Note
This method does not support special properties of XML including DTD,
CData, XSD schemas, processing instructions, comments, and others.
Only namespaces at the root level are supported. However, stylesheet
allows design changes after the initial output.
Let’s look at a few examples.
Write an XML without options:
In [392]: geom_df = pd.DataFrame(
.....: {
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [393]: print(geom_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with new root and row name:
In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects"))
<?xml version='1.0' encoding='utf-8'?>
<geometry>
<objects>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</objects>
<objects>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</objects>
<objects>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</objects>
</geometry>
Write an attribute-centric XML:
In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))
<?xml version='1.0' encoding='utf-8'?>
<data>
<row index="0" shape="square" degrees="360" sides="4.0"/>
<row index="1" shape="circle" degrees="360"/>
<row index="2" shape="triangle" degrees="180" sides="3.0"/>
</data>
Write a mix of elements and attributes:
In [396]: print(
.....: geom_df.to_xml(
.....: index=False,
.....: attr_cols=['shape'],
.....: elem_cols=['degrees', 'sides'])
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row shape="square">
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row shape="circle">
<degrees>360</degrees>
<sides/>
</row>
<row shape="triangle">
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Any DataFrames with hierarchical columns will be flattened for XML element names
with levels delimited by underscores:
In [397]: ext_geom_df = pd.DataFrame(
.....: {
.....: "type": ["polygon", "other", "polygon"],
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [398]: pvt_df = ext_geom_df.pivot_table(index='shape',
.....: columns='type',
.....: values=['degrees', 'sides'],
.....: aggfunc='sum')
.....:
In [399]: pvt_df
Out[399]:
degrees sides
type other polygon other polygon
shape
circle 360.0 NaN 0.0 NaN
square NaN 360.0 NaN 4.0
triangle NaN 180.0 NaN 3.0
In [400]: print(pvt_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<shape>circle</shape>
<degrees_other>360.0</degrees_other>
<degrees_polygon/>
<sides_other>0.0</sides_other>
<sides_polygon/>
</row>
<row>
<shape>square</shape>
<degrees_other/>
<degrees_polygon>360.0</degrees_polygon>
<sides_other/>
<sides_polygon>4.0</sides_polygon>
</row>
<row>
<shape>triangle</shape>
<degrees_other/>
<degrees_polygon>180.0</degrees_polygon>
<sides_other/>
<sides_polygon>3.0</sides_polygon>
</row>
</data>
Write an XML with default namespace:
In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"}))
<?xml version='1.0' encoding='utf-8'?>
<data xmlns="https://example.com">
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with namespace prefix:
In [402]: print(
.....: geom_df.to_xml(namespaces={"doc": "https://example.com"},
.....: prefix="doc")
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
<doc:row>
<doc:index>0</doc:index>
<doc:shape>square</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides>4.0</doc:sides>
</doc:row>
<doc:row>
<doc:index>1</doc:index>
<doc:shape>circle</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides/>
</doc:row>
<doc:row>
<doc:index>2</doc:index>
<doc:shape>triangle</doc:shape>
<doc:degrees>180</doc:degrees>
<doc:sides>3.0</doc:sides>
</doc:row>
</doc:data>
Write an XML without declaration or pretty print:
In [403]: print(
.....: geom_df.to_xml(xml_declaration=False,
.....: pretty_print=False)
.....: )
.....:
<data><row><index>0</index><shape>square</shape><degrees>360</degrees><sides>4.0</sides></row><row><index>1</index><shape>circle</shape><degrees>360</degrees><sides/></row><row><index>2</index><shape>triangle</shape><degrees>180</degrees><sides>3.0</sides></row></data>
Write an XML and transform with stylesheet:
In [404]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/data">
.....: <geometry>
.....: <xsl:apply-templates select="row"/>
.....: </geometry>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <object index="{index}">
.....: <xsl:if test="shape!='circle'">
.....: <xsl:attribute name="type">polygon</xsl:attribute>
.....: </xsl:if>
.....: <xsl:copy-of select="shape"/>
.....: <property>
.....: <xsl:copy-of select="degrees|sides"/>
.....: </property>
.....: </object>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [405]: print(geom_df.to_xml(stylesheet=xsl))
<?xml version="1.0"?>
<geometry>
<object index="0" type="polygon">
<shape>square</shape>
<property>
<degrees>360</degrees>
<sides>4.0</sides>
</property>
</object>
<object index="1">
<shape>circle</shape>
<property>
<degrees>360</degrees>
<sides/>
</property>
</object>
<object index="2" type="polygon">
<shape>triangle</shape>
<property>
<degrees>180</degrees>
<sides>3.0</sides>
</property>
</object>
</geometry>
XML Final Notes#
All XML documents adhere to W3C specifications. Both etree and lxml
parsers will fail to parse any markup document that is not well-formed or
follows XML syntax rules. Do be aware HTML is not an XML document unless it
follows XHTML specs. However, other popular markup types including KML, XAML,
RSS, MusicML, MathML are compliant XML schemas.
For the above reason, if your application builds XML prior to pandas operations,
use appropriate DOM libraries like etree and lxml to build the necessary
document rather than string concatenation or regex adjustments. Always remember
that XML is a special text file with markup rules.
With very large XML files (several hundred MBs to GBs), XPath and XSLT
can become memory-intensive operations. Be sure to have enough available
RAM for reading and writing to large XML files (roughly about 5 times the
size of the text).
Because XSLT is a programming language, use it with caution since such scripts
can pose a security risk in your environment and can run large or infinite
recursive operations. Always test scripts on small fragments before full run.
The etree parser supports all functionality of both read_xml and
to_xml except for complex XPath and any XSLT. Though limited in features,
etree is still a reliable and capable parser and tree builder. Its
performance may trail lxml to a certain degree for larger files but
relatively unnoticeable on small to medium size files.
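As a quick, hedged sketch (reusing the small geom_df frame defined above), the same round trip can be run with the pure Python etree parser, as long as no XSLT or complex XPath is involved:
# A minimal sketch: write and read back with the etree parser
xml_etree = geom_df.to_xml(parser="etree")
df_etree = pd.read_xml(xml_etree, parser="etree")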
Excel files#
The read_excel() method can read Excel 2007+ (.xlsx) files
using the openpyxl Python module. Excel 2003 (.xls) files
can be read using xlrd. Binary Excel (.xlsb)
files can be read using pyxlsb.
The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data.
See the cookbook for some advanced strategies.
Warning
The xlwt package for writing old-style .xls
excel files is no longer maintained.
The xlrd package is now only for reading
old-style .xls files.
Before pandas 1.3.0, the default argument engine=None to read_excel()
would result in using the xlrd engine in many cases, including new
Excel 2007+ (.xlsx) files. pandas will now default to using the
openpyxl engine.
It is strongly encouraged to install openpyxl to read Excel 2007+
(.xlsx) files.
Please do not report issues when using xlrd to read .xlsx files. This
is no longer supported; switch to using openpyxl instead.
Attempting to use the xlwt engine will raise a FutureWarning
unless the option io.excel.xls.writer is set to "xlwt".
While this option is now deprecated and will also raise a FutureWarning,
it can be globally set and the warning suppressed. Users are recommended to
write .xlsx files using the openpyxl engine instead.
Reading Excel files#
In the most basic use-case, read_excel takes a path to an Excel
file, and the sheet_name indicating which sheet to parse.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
ExcelFile class#
To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel
There will be a performance benefit for reading multiple sheets as the file is
read into memory only once.
xlsx = pd.ExcelFile("path_to_file.xls")
df = pd.read_excel(xlsx, "Sheet1")
The ExcelFile class can also be used as a context manager.
with pd.ExcelFile("path_to_file.xls") as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
The sheet_names property will generate
a list of the sheet names in the file.
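For example, a minimal sketch (hypothetical file name) that inspects the sheets before deciding what to parse:
# List the available sheet names without reading any data
with pd.ExcelFile("path_to_file.xls") as xls:
    print(xls.sheet_names)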
The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)
Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.
# using the ExcelFile class
data = {}
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"])
# equivalent using the read_excel function
data = pd.read_excel(
"path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"]
)
ExcelFile can also be called with an xlrd.book.Book object
as a parameter. This allows the user to control how the excel file is read.
For example, sheets can be loaded on demand by calling xlrd.open_workbook()
with on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
Specifying sheets#
Note
The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.
Note
An ExcelFile’s attribute sheet_names provides access to a list of sheets.
The argument sheet_name allows specifying the sheet or sheets to read.
The default value for sheet_name is 0, indicating to read the first sheet.
Pass a string to refer to the name of a particular sheet in the workbook.
Pass an integer to refer to the index of a sheet. Indices follow Python
convention, beginning at 0.
Pass a list of either strings or integers, to return a dictionary of specified sheets.
Pass a None to return a dictionary of all available sheets.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])
Using the sheet index:
# Returns a DataFrame
pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])
Using all default values:
# Returns a DataFrame
pd.read_excel("path_to_file.xls")
Using None to get all sheets:
# Returns a dictionary of DataFrames
pd.read_excel("path_to_file.xls", sheet_name=None)
Using a list to get multiple sheets:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])
read_excel can read more than one sheet, by setting sheet_name to either
a list of sheet names, a list of sheet positions, or None to read all sheets.
Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
Reading a MultiIndex#
read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [406]: df = pd.DataFrame(
.....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
.....: )
.....:
In [407]: df.to_excel("path_to_file.xlsx")
In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [409]: df
Out[409]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same
parameters.
In [410]: df.index = df.index.set_names(["lvl1", "lvl2"])
In [411]: df.to_excel("path_to_file.xlsx")
In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [413]: df
Out[413]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each
should be passed to index_col and header:
In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
In [415]: df.to_excel("path_to_file.xlsx")
In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
In [417]: df
Out[417]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Missing values in columns specified in index_col will be forward filled to
allow roundtripping with to_excel for merged_cells=True. To avoid forward
filling the missing values, use set_index after reading the data instead of
index_col.
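A minimal sketch of that alternative (file and column names are hypothetical):
# Read the index levels as ordinary columns, then build the index afterwards
# so that missing cells are not forward filled.
df = pd.read_excel("path_to_other_file.xlsx")
df = df.set_index(["lvl1", "lvl2"])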
Parsing specific columns#
It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a usecols keyword to allow you to specify a subset of columns to parse.
Changed in version 1.0.0.
Passing in an integer for usecols will no longer work. Please pass in a list
of ints from 0 to usecols inclusive instead.
You can specify a comma-delimited set of Excel columns and ranges as a string:
pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")
If usecols is a list of integers, then it is assumed to be the file column
indices to be parsed.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
If usecols is a list of strings, it is assumed that each string corresponds
to a column name provided either by the user in names or inferred from the
document header row(s). Those strings define which columns will be parsed:
pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])
Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].
If usecols is callable, the callable function will be evaluated against
the column names, returning names where the callable function evaluates to True.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
Parsing dates#
Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
look like dates (but are not actually formatted as dates in excel), you can
use the parse_dates keyword to parse those strings to datetimes:
pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
Cell converters#
It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})
This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
def cfun(x):
return int(x) if x else -1
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
Dtype specifications#
As an alternative to converters, the type for an entire column can
be specified using the dtype keyword, which takes a dictionary
mapping column names to types. To interpret data with
no type inference, use the type str or object.
pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
Writing Excel files#
Writing Excel files to disk#
To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output.
The index_label will be placed in the second
row instead of the first. You can place it in the first row by setting the
merge_cells option in to_excel() to False:
df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)
In order to write separate DataFrames to separate sheets in a single Excel file,
one can pass an ExcelWriter.
with pd.ExcelWriter("path_to_file.xlsx") as writer:
df1.to_excel(writer, sheet_name="Sheet1")
df2.to_excel(writer, sheet_name="Sheet2")
Writing Excel files to memory#
pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.
from io import BytesIO
bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
# Save the workbook
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note
engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlrd' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.
Excel writer engines#
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed in a future version
of pandas. This is the only engine in pandas that supports writing to
.xls files.
pandas chooses an Excel writer via two methods:
the engine keyword argument
the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl
for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:
openpyxl: version 2.4 or higher is required
xlsxwriter
xlwt
# By setting the 'engine' in the DataFrame 'to_excel()' methods.
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter")
# Or via pandas configuration.
from pandas import options # noqa: E402
options.io.excel.xlsx.writer = "xlsxwriter"
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Style and formatting#
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method.
float_format : Format string for floating point numbers (default None).
freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
Using the Xlsxwriter engine provides many options for controlling the
format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the
Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html
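For example, a minimal sketch (hypothetical file name) combining both options:
# Two-decimal floats and a frozen header row plus first column
df.to_excel("path_to_file.xlsx", float_format="%.2f", freeze_panes=(1, 1))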
OpenDocument Spreadsheets#
New in version 0.25.
The read_excel() method can also read OpenDocument spreadsheets
using the odfpy module. The semantics and features for reading
OpenDocument spreadsheets match what can be done for Excel files using
engine='odf'.
# Returns a DataFrame
pd.read_excel("path_to_file.ods", engine="odf")
Note
Currently pandas only supports reading OpenDocument spreadsheets. Writing
is not implemented.
Binary Excel (.xlsb) files#
New in version 1.0.0.
The read_excel() method can also read binary Excel files
using the pyxlsb module. The semantics and features for reading
binary Excel files mostly match what can be done for Excel files using
engine='pyxlsb'. pyxlsb does not recognize datetime types
in files and will return floats instead.
# Returns a DataFrame
pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
Note
Currently pandas only supports reading binary Excel files. Writing
is not implemented.
Clipboard#
A handy way to grab data is to use the read_clipboard() method,
which takes the contents of the clipboard buffer and passes them to the
read_csv method. For instance, you can copy the following text to the
clipboard (CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
And then import the data directly to a DataFrame by calling:
>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard. Following which you can paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame into clipboard and reading it back.
>>> df = pd.DataFrame(
... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
... )
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got the same content back, which we had earlier written to the clipboard.
Note
You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
Pickling#
All pandas objects are equipped with to_pickle methods which use Python’s
cPickle module to save data structures to disk using the pickle format.
In [418]: df
Out[418]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [419]: df.to_pickle("foo.pkl")
The read_pickle function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:
In [420]: pd.read_pickle("foo.pkl")
Out[420]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning
read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3
Compressed pickle files#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read
and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing.
The zip file format only supports reading and must contain only one data file
to be read.
The compression type can be an explicit parameter or be inferred from the file extension.
If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip',
'.xz', or '.zst', respectively.
The compression parameter can also be a dict in order to pass options to the
compression protocol. It must have a 'method' key set to the name
of the compression protocol, which must be one of
{'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to
the underlying compression library.
In [421]: df = pd.DataFrame(
.....: {
.....: "A": np.random.randn(1000),
.....: "B": "foo",
.....: "C": pd.date_range("20130101", periods=1000, freq="s"),
.....: }
.....: )
.....:
In [422]: df
Out[422]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Using an explicit compression type:
In [423]: df.to_pickle("data.pkl.compress", compression="gzip")
In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [425]: rt
Out[425]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Inferring compression type from the extension:
In [426]: df.to_pickle("data.pkl.xz", compression="infer")
In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer")
In [428]: rt
Out[428]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
The default is to ‘infer’:
In [429]: df.to_pickle("data.pkl.gz")
In [430]: rt = pd.read_pickle("data.pkl.gz")
In [431]: rt
Out[431]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
In [432]: df["A"].to_pickle("s1.pkl.bz2")
In [433]: rt = pd.read_pickle("s1.pkl.bz2")
In [434]: rt
Out[434]:
0 -0.828876
1 -0.110383
2 2.357598
3 -1.620073
4 0.440903
...
995 -1.177365
996 1.236988
997 0.743946
998 -0.533097
999 -0.140850
Name: A, Length: 1000, dtype: float64
Passing options to the compression protocol in order to speed up compression:
In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
msgpack#
pandas support for msgpack has been removed in version 1.0.0. It is
recommended to use pickle instead.
Alternatively, you can also use the Arrow IPC serialization format for on-the-wire
transmission of pandas objects. For documentation on pyarrow, see
here.
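For illustration only, a minimal sketch of an Arrow IPC stream round trip using a DataFrame df (requires pyarrow; this is not a pandas API):
import pyarrow as pa
# Convert to an Arrow table and write it to an in-memory IPC stream
table = pa.Table.from_pandas(df)
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
# Read the stream back into a DataFrame
received = pa.ipc.open_stream(sink.getvalue()).read_all().to_pandas()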
HDF5 (PyTables)#
HDFStore is a dict-like object which reads and writes pandas using
the high performance HDF5 format using the excellent PyTables library. See the cookbook
for some advanced strategies
Warning
pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle. Loading pickled data received from
untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
In [436]: store = pd.HDFStore("store.h5")
In [437]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a
dict:
In [438]: index = pd.date_range("1/1/2000", periods=8)
In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
# store.put('s', s) is an equivalent method
In [441]: store["s"] = s
In [442]: store["df"] = df
In [443]: store
Out[443]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In a current or later Python session, you can retrieve stored objects:
# store.get('df') is an equivalent method
In [444]: store["df"]
Out[444]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# dotted (attribute) access provides get as well
In [445]: store.df
Out[445]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Deletion of the object specified by the key:
# store.remove('df') is an equivalent method
In [446]: del store["df"]
In [447]: store
Out[447]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Closing a Store and using a context manager:
In [448]: store.close()
In [449]: store
Out[449]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [450]: store.is_open
Out[450]: False
# Working with, and automatically closing the store using a context manager
In [451]: with pd.HDFStore("store.h5") as store:
.....: store.keys()
.....:
Read/write API#
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing,
similar to how read_csv and to_csv work.
In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True)
In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"])
Out[454]:
A B
3 3 3
4 4 4
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [455]: df_with_missing = pd.DataFrame(
.....: {
.....: "col1": [0, np.nan, 2],
.....: "col2": [1, np.nan, np.nan],
.....: }
.....: )
.....:
In [456]: df_with_missing
Out[456]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
In [458]: pd.read_hdf("file.h5", "df_with_missing")
Out[458]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [459]: df_with_missing.to_hdf(
.....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True
.....: )
.....:
In [460]: pd.read_hdf("file.h5", "df_with_missing")
Out[460]:
col1 col2
0 0.0 1.0
2 2.0 NaN
Fixed format#
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
Warning
A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
>>> pd.read_hdf("test_fixed.h5", "df", where="index>5")
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format#
HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete and query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
make put/append/to_hdf store in the table format by default.
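A minimal sketch of that option (hypothetical file name):
# Make the table format the default for put/append/to_hdf
pd.set_option("io.hdf.default_format", "table")
df.to_hdf("store_default.h5", "df")  # stored as a table without passing format="table"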
In [461]: store = pd.HDFStore("store.h5")
In [462]: df1 = df[0:4]
In [463]: df2 = df[4:]
# append data (creates a table automatically)
In [464]: store.append("df", df1)
In [465]: store.append("df", df2)
In [466]: store
Out[466]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# select the entire object
In [467]: store.select("df")
Out[467]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# the type of stored data
In [468]: store.root.df._v_attrs.pandas_type
Out[468]: 'frame_table'
Note
You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys#
Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are always
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and below, so be careful.
In [469]: store.put("foo/bar/bah", df)
In [470]: store.append("food/orange", df)
In [471]: store.append("food/apple", df)
In [472]: store
Out[472]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# a list of keys are returned
In [473]: store.keys()
Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']
# remove all nodes under this level
In [474]: store.remove("food")
In [475]: store
Out[475]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which
will yield a tuple for each group key along with the relative keys of its contents.
In [476]: for (path, subgroups, subkeys) in store.walk():
.....: for subgroup in subgroups:
.....: print("GROUP: {}/{}".format(path, subgroup))
.....: for subkey in subkeys:
.....: key = "/".join([path, subkey])
.....: print("KEY: {}".format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Warning
Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]
Instead, use explicit string based keys:
In [477]: store["foo/bar/bah"]
Out[477]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Storing types#
Storing mixed types in a table#
Storing mixed-dtype data is supported. Strings are stored as a
fixed-width using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append
will set a larger minimum for the string columns. Storing floats,
strings, ints, bools and datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default
NaN representation on disk (which converts to/from np.nan); this
defaults to nan.
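A minimal sketch of the nan_rep option (hypothetical key and frame names):
# Change the on-disk NaN representation used for string columns
store.append("df_strings", df_with_strings, nan_rep="_missing_")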
In [478]: df_mixed = pd.DataFrame(
.....: {
.....: "A": np.random.randn(8),
.....: "B": np.random.randn(8),
.....: "C": np.array(np.random.randn(8), dtype="float32"),
.....: "string": "string",
.....: "int": 1,
.....: "bool": True,
.....: "datetime64": pd.Timestamp("20010102"),
.....: },
.....: index=list(range(8)),
.....: )
.....:
In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan
In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
In [481]: df_mixed1 = store.select("df_mixed")
In [482]: df_mixed1
Out[482]:
A B C string int bool datetime64
0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02
1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02
2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02
3 NaN NaN -0.225250 NaN 1 True NaT
4 NaN NaN -0.890978 NaN 1 True NaT
5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02
6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02
7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02
In [483]: df_mixed1.dtypes.value_counts()
Out[483]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
# we have provided a minimum string column size
In [484]: store.root.df_mixed.table
Out[484]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
Storing MultiIndex DataFrames#
Storing MultiIndex DataFrames as tables is very similar to
storing/selecting from homogeneous index DataFrames.
In [485]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=["foo", "bar"],
.....: )
.....:
In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [487]: df_mi
Out[487]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
In [488]: store.append("df_mi", df_mi)
In [489]: store.select("df_mi")
Out[489]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
# the levels are automatically included as data columns
In [490]: store.select("df_mi", "foo=bar")
Out[490]:
A B C
foo bar
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
Note
The index keyword is reserved and cannot be used as a level name.
Querying#
Querying a table#
select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.
A query is specified using the Term class under the hood, as a boolean expression.
index and columns are supported indexers of DataFrames.
If data_columns are specified, these can be used as additional indexers.
A level name in a MultiIndex, with default name level_0, level_1, … if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
| : or
& : and
( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note
= will be automatically expanded to the comparison operator ==
~ is the not operator, but can only be used in very limited
circumstances
If a list/tuple of expressions is passed they will be combined via &
The following are valid expressions:
'index >= date'
"columns = ['A', 'D']"
"columns in ['A', 'D']"
'columns = A'
'columns == A'
"~(columns = ['A', 'B'])"
'index > df.index[3] & string = "bar"'
'(index > df.index[3] & index <= df.index[6]) | string = "bar"'
"ts >= Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
functions that will be evaluated, e.g. Timestamp('2012-02-01')
strings, e.g. "bar"
date-like, e.g. 20130101, or "20130101"
lists, e.g. "['A', 'B']"
variables that are defined in the local names space, e.g. date
Note
Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select("df", "index == string")
instead of this
string = "HolyMoly'"
store.select('df', f'index == {string}')
The latter will not work and will raise a SyntaxError. Note that
there’s a single quote followed by a double quote in the string
variable.
If you must interpolate, use the '%r' format specifier
store.select("df", "index == %r" % string)
which will quote string.
Here are some examples:
In [491]: dfq = pd.DataFrame(
.....: np.random.randn(10, 4),
.....: columns=list("ABCD"),
.....: index=pd.date_range("20130101", periods=10),
.....: )
.....:
In [492]: store.append("dfq", dfq, format="table", data_columns=True)
Use boolean expressions, with in-line function evaluation.
In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")
Out[493]:
A B
2013-01-05 1.366810 1.073372
2013-01-06 2.119746 -2.628174
2013-01-07 0.337920 -0.634027
2013-01-08 1.053434 1.109090
2013-01-09 -0.772942 -0.269415
2013-01-10 0.048562 -0.285920
Use inline column reference.
In [494]: store.select("dfq", where="A>0 or C>0")
Out[494]:
A B C D
2013-01-01 0.856838 1.491776 0.001283 0.701816
2013-01-02 -1.097917 0.102588 0.661740 0.443531
2013-01-03 0.559313 -0.459055 -1.222598 -0.455304
2013-01-05 1.366810 1.073372 -0.994957 0.755314
2013-01-06 2.119746 -2.628174 -0.089460 -0.133636
2013-01-07 0.337920 -0.634027 0.421107 0.604303
2013-01-08 1.053434 1.109090 -0.367891 -0.846206
2013-01-10 0.048562 -0.285920 1.334100 0.194462
The columns keyword can be supplied to select a list of columns to be
returned; this is equivalent to passing a
'columns=list_of_columns_to_filter':
In [495]: store.select("df", "columns=['A', 'B']")
Out[495]:
A B
2000-01-01 -0.398501 -0.677311
2000-01-02 -1.167564 -0.593353
2000-01-03 -0.131959 0.089012
2000-01-04 0.169405 -1.358046
2000-01-05 0.492195 0.076693
2000-01-06 -0.285283 -1.210529
2000-01-07 0.941577 -0.342447
2000-01-08 0.052607 2.093214
start and stop parameters can be specified to limit the total search
space. These are in terms of the total number of rows in a table.
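For example, a minimal sketch that restricts a query to the first four stored rows:
# Only rows 0..3 of the table are searched
store.select("df", "columns=['A', 'B']", start=0, stop=4)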
Note
select will raise a ValueError if the query expression has an unknown
variable reference. Usually this means that you are trying to select on a column
that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]#
You can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D,s,ms,us,ns for the timedelta. Here’s an example:
In [496]: from datetime import timedelta
In [497]: dftd = pd.DataFrame(
.....: {
.....: "A": pd.Timestamp("20130101"),
.....: "B": [
.....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
.....: for i in range(10)
.....: ],
.....: }
.....: )
.....:
In [498]: dftd["C"] = dftd["A"] - dftd["B"]
In [499]: dftd
Out[499]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [500]: store.append("dftd", dftd, data_columns=True)
In [501]: store.select("dftd", "C<'-3.5D'")
Out[501]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
Query MultiIndex#
Selecting from a MultiIndex can be achieved by using the name of the level.
In [502]: df_mi.index.names
Out[502]: FrozenList(['foo', 'bar'])
In [503]: store.select("df_mi", "foo=baz and bar=two")
Out[503]:
A B C
foo bar
baz two 0.183573 0.145277 0.308146
If the MultiIndex level names are None, the levels are automatically made available via
the level_n keyword with n the level of the MultiIndex you want to select from.
In [504]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:
In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [506]: df_mi_2
Out[506]:
A B C
foo one -0.646538 1.210676 -0.315409
two 1.528366 0.376542 0.174490
three 1.247943 -0.742283 0.710400
bar one 0.434128 -1.246384 1.139595
two 1.388668 -0.413554 -0.666287
baz two 0.010150 -0.163820 -0.115305
three 0.216467 0.633720 0.473945
qux one -0.155446 1.287082 0.320201
two -1.256989 0.874920 0.765944
three 0.025557 -0.729782 -0.127439
In [507]: store.append("df_mi_2", df_mi_2)
# the levels are automatically included as data columns with keyword level_n
In [508]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[508]:
A B C
foo two 1.528366 0.376542 0.17449
Indexing#
You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed your queries a great deal when you use a select with the
indexed dimension as the where.
Note
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.
# we have automagically already created an index (in the first section)
In [509]: i = store.root.df.table.cols.index.index
In [510]: i.optlevel, i.kind
Out[510]: (6, 'medium')
# change an index by passing new parameters
In [511]: store.create_table_index("df", optlevel=9, kind="full")
In [512]: i = store.root.df.table.cols.index.index
In [513]: i.optlevel, i.kind
Out[513]: (9, 'full')
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.
In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [516]: st = pd.HDFStore("appends.h5", mode="w")
In [517]: st.append("df", df_1, data_columns=["B"], index=False)
In [518]: st.append("df", df_2, data_columns=["B"], index=False)
In [519]: st.get_storer("df").table
Out[519]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
Then create the index when finished appending.
In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
In [521]: st.get_storer("df").table
Out[521]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, fullshuffle, zlib(1)).is_csi=True}
In [522]: st.close()
See here for how to create a completely-sorted-index (CSI) on an existing store.
Query via data columns#
You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance, say you want to perform this common
operation on-disk, and return just the frame that matches this
query. You can specify data_columns = True to force all columns to
be data_columns.
In [523]: df_dc = df.copy()
In [524]: df_dc["string"] = "foo"
In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan
In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar"
In [527]: df_dc["string2"] = "cool"
In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0
In [529]: df_dc
Out[529]:
A B C string string2
2000-01-01 -0.398501 -0.677311 -0.874991 foo cool
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-04 0.169405 -1.358046 -0.105563 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool
2000-01-07 0.941577 -0.342447 0.222031 foo cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# on-disk operations
In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
In [531]: store.select("df_dc", where="B > 0")
Out[531]:
A B C string string2
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# getting creative
In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo")
Out[532]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# this is in-memory version of this type of selection
In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
Out[533]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# we have automagically created this index and the B/C/string/string2
# columns are stored separately as ``PyTables`` columns
In [534]: store.root.df_dc.table
Out[534]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"B": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"C": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}
There is some performance degradation by making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
create a new table!).
Iterator#
You can pass iterator=True or chunksize=number_in_a_chunk
to select and select_as_multiple to return an iterator on the results.
The default is 50,000 rows returned in a chunk.
In [535]: for df in store.select("df", chunksize=3):
.....: print(df)
.....:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
A B C
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
A B C
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Note
You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.
for df in pd.read_hdf("store.h5", "df", chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you
are doing a query, the chunksize will subdivide the total rows in the table
with the query applied, returning an iterator on potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return
chunks.
In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)})
In [537]: dfeq
Out[537]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
In [538]: store.append("dfeq", dfeq, data_columns=["number"])
In [539]: def chunks(l, n):
.....: return [l[i: i + n] for i in range(0, len(l), n)]
.....:
In [540]: evens = [2, 4, 6, 8, 10]
In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens")
In [542]: for c in chunks(coordinates, 2):
.....: print(store.select("dfeq", where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10
Advanced queries#
Select a single column#
To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
In [543]: store.select_column("df_dc", "index")
Out[543]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
In [544]: store.select_column("df_dc", "string")
Out[544]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates#
Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.
In [545]: df_coord = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [546]: store.append("df_coord", df_coord)
In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101")
In [548]: c
Out[548]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
In [549]: store.select("df_coord", where=c)
Out[549]:
0 1
2002-01-02 0.009035 0.921784
2002-01-03 -1.476563 -1.376375
2002-01-04 1.266731 2.173681
2002-01-05 0.147621 0.616468
2002-01-06 0.008611 2.136001
... ... ...
2002-09-22 0.781169 -0.791687
2002-09-23 -0.764810 -2.000933
2002-09-24 -0.345662 0.393915
2002-09-25 -0.116661 0.834638
2002-09-26 -1.341780 0.686366
[268 rows x 2 columns]
Selecting using a where mask#
Sometimes your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the months of
a datetimeindex which are 5.
In [550]: df_mask = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [551]: store.append("df_mask", df_mask)
In [552]: c = store.select_column("df_mask", "index")
In [553]: where = c[pd.DatetimeIndex(c).month == 5].index
In [554]: store.select("df_mask", where=where)
Out[554]:
0 1
2000-05-01 -0.386742 -0.977433
2000-05-02 -0.228819 0.471671
2000-05-03 0.337307 1.840494
2000-05-04 0.050249 0.307149
2000-05-05 -0.802947 -0.946730
... ... ...
2002-05-27 1.605281 1.741415
2002-05-28 -0.804450 -0.715040
2002-05-29 -0.874851 0.037178
2002-05-30 -0.161167 -1.294944
2002-05-31 -0.258463 -0.731969
[93 rows x 2 columns]
Storer object#
If you want to inspect the stored object, retrieve via
get_storer. You could use this programmatically to say get the number
of rows in an object.
In [555]: store.get_storer("df_dc").nrows
Out[555]: 8
Multiple table queries#
The methods append_to_multiple and
select_as_multiple can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) that you index most/all of the columns, and perform your
queries. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.NaN rows are not written to the HDFStore, so if
you choose to call dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
In [556]: df_mt = pd.DataFrame(
.....: np.random.randn(8, 6),
.....: index=pd.date_range("1/1/2000", periods=8),
.....: columns=["A", "B", "C", "D", "E", "F"],
.....: )
.....:
In [557]: df_mt["foo"] = "bar"
In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan
# you can also create the tables individually
In [559]: store.append_to_multiple(
.....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt"
.....: )
.....:
In [560]: store
Out[560]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# individual tables were created
In [561]: store.select("df1_mt")
Out[561]:
A B
2000-01-01 0.079529 -1.459471
2000-01-02 NaN NaN
2000-01-03 -0.423113 2.314361
2000-01-04 0.756744 -0.792372
2000-01-05 -0.184971 0.170852
2000-01-06 0.678830 0.633974
2000-01-07 0.034973 0.974369
2000-01-08 -2.110103 0.243062
In [562]: store.select("df2_mt")
Out[562]:
C D E F foo
2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar
2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar
2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar
2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar
2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar
2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar
2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar
# as a multiple
In [563]: store.select_as_multiple(
.....: ["df1_mt", "df2_mt"],
.....: where=["A>0", "B>0"],
.....: selector="df1_mt",
.....: )
.....:
Out[563]:
A B C D E F foo
2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar
Delete from a table#
You can delete from a table selectively by specifying a where. When
deleting rows, it is important to understand that PyTables deletes
rows by erasing them and then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.
Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:
date_1
id_1
id_2
.
id_n
date_2
id_1
.
id_n
It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.
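For example, a minimal sketch using the df_coord table appended earlier (the cutoff date is illustrative); the where syntax is the same as for select:
# remove every row after the cutoff; the remaining rows are moved up on disk
store.remove("df_coord", where="index > 20020101")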
Warning
Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files
automatically. Thus, repeatedly deleting (or removing nodes) and adding
again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Notes & caveats#
Compression#
PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables. Two parameters are used to
control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed.
complevel=0 and complevel=None disables compression and
0<complevel<10 enables compression.
complib specifies which compression library to use.
If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates
or speed and the results will depend on the type of data. Which type of
compression to choose depends on your specific needs and data. The list
of supported compression libraries:
zlib: The default compression library.
A classic in terms of compression, achieves good compression
rates but is somewhat slow.
lzo: Fast
compression and decompression.
bzip2: Good compression rates.
blosc: Fast compression and
decompression.
Support for alternative blosc compressors:
blosc:blosclz This is the
default compressor for blosc
blosc:lz4:
A compact, very popular and fast compressor.
blosc:lz4hc:
A tweaked version of LZ4, produces better
compression ratios at the expense of speed.
blosc:snappy:
A popular compressor used in many places.
blosc:zlib: A classic;
somewhat slower than the previous ones, but
achieving better compression ratios.
blosc:zstd: An
extremely well balanced codec; it provides the best
compression ratios among the others above, and at
reasonably fast speed.
If complib is defined as something other than the listed libraries a
ValueError exception is issued.
Note
If the library specified with the complib option is missing on your platform,
compression defaults to zlib without further ado.
Enable compression for all objects within the file:
store_compressed = pd.HDFStore(
"store_compressed.h5", complevel=9, complib="blosc:blosclz"
)
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append("df", df, complib="zlib", complevel=5)
ptrepack#
PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5
Furthermore ptrepack in.h5 out.h5 will repack the file to allow
you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the copy method.
Caveats#
Warning
HDFStore is not-threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See (GH2397) for more information.
If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created, its columns (DataFrame)
are fixed; only exactly the same columns can be appended.
Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
Warning
PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.
DataTypes#
HDFStore will map an object dtype to the PyTables underlying
dtype. This means the following types are known to work:
Type                                                    Represents missing values
floating : float64, float32, float16                    np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                          NaT
timedelta64[ns]                                         NaT
categorical : see the section below
object : strings                                        np.nan
unicode columns are not supported, and WILL FAIL.
Categorical data#
You can write data that contains category dtypes to a HDFStore.
Queries work the same as if it was an object array. However, the category dtyped data is
stored in a more efficient manner.
In [564]: dfcat = pd.DataFrame(
.....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)}
.....: )
.....:
In [565]: dfcat
Out[565]:
A B
0 a -1.608059
1 a 0.851060
2 b -0.736931
3 b 0.003538
4 c -1.422611
5 d 2.060901
6 b 0.993899
7 a -1.371768
In [566]: dfcat.dtypes
Out[566]:
A category
B float64
dtype: object
In [567]: cstore = pd.HDFStore("cats.h5", mode="w")
In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"])
In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']")
In [570]: result
Out[570]:
A B
2 b -0.736931
3 b 0.003538
4 c -1.422611
6 b 0.993899
In [571]: result.dtypes
Out[571]:
A category
B float64
dtype: object
String columns#
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column's itemsize is calculated as the maximum length of the
data (for that column) passed to the HDFStore in the first append. If a subsequent append
introduces a string longer than the column can hold, an Exception will be raised (otherwise these
columns would be silently truncated, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note
If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed
In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
In [573]: dfs
Out[573]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
# A and B have a size of 30
In [574]: store.append("dfs", dfs, min_itemsize=30)
In [575]: store.get_storer("dfs").table
Out[575]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
# A is created as a data_column with a size of 30
# B's size is calculated
In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30})
In [577]: store.get_storer("dfs2").table
Out[577]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"A": Index(6, mediumshuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
You could inadvertently turn an actual nan value into a missing value.
In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
In [579]: dfss
Out[579]:
A
0 foo
1 bar
2 nan
In [580]: store.append("dfss", dfss)
In [581]: store.select("dfss")
Out[581]:
A
0 foo
1 bar
2 NaN
# here you need to specify a different nan rep
In [582]: store.append("dfss2", dfss, nan_rep="_nan_")
In [583]: store.select("dfss2")
Out[583]:
A
0 foo
1 bar
2 nan
External compatibility#
HDFStore writes table format objects in specific formats suitable for
producing loss-less round trips to pandas objects. For external
compatibility, HDFStore can read native PyTables format
tables.
It is possible to write an HDFStore object that can easily be imported into R using the
rhdf5 library (Package website). Create a table format store like this:
In [584]: df_for_r = pd.DataFrame(
.....: {
.....: "first": np.random.rand(100),
.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100,)),
.....: },
.....: index=range(100),
.....: )
.....:
In [585]: df_for_r.head()
Out[585]:
first second class
0 0.013480 0.504941 0
1 0.690984 0.898188 1
2 0.510113 0.618748 1
3 0.357698 0.004972 0
4 0.451658 0.012065 1
In [586]: store_export = pd.HDFStore("export.h5")
In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns)
In [588]: store_export
Out[588]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
Now you can import the DataFrame into R:
> data = loadhdf5data("transfer.hdf5")
> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1
Note
The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.
Performance#
tables format come with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
You can pass expectedrows=<int> to the first append,
to set the TOTAL number of rows that PyTables will expect.
This will optimize read/write performance (see the sketch at the end of these notes).
Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.
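A minimal sketch of the chunksize / expectedrows write hints noted above (the key name and row counts are illustrative):
big = pd.DataFrame({"A": np.random.randn(1000000), "B": [1] * 1000000})
# write in 100,000-row chunks and tell PyTables the expected total up front
store.append("big", big, chunksize=100000, expectedrows=1000000)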
Feather#
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as categorical and datetime with tz.
Several caveats:
The format will NOT write an Index, or MultiIndex for the
DataFrame and will raise an error if a non-default one is provided. You
can .reset_index() to store the index or .reset_index(drop=True) to
ignore it.
Duplicate column names and non-string columns names are not supported
Actual Python objects in object dtype columns are not supported. These will
raise a helpful error message on an attempt at serialization.
See the Full Documentation.
In [589]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.Categorical(list("abc")),
.....: "g": pd.date_range("20130101", periods=3),
.....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "i": pd.date_range("20130101", periods=3, freq="ns"),
.....: }
.....: )
.....:
In [590]: df
Out[590]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
In [591]: df.dtypes
Out[591]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Write to a feather file.
In [592]: df.to_feather("example.feather")
Read from a feather file.
In [593]: result = pd.read_feather("example.feather")
In [594]: result
Out[594]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
# we preserve dtypes
In [595]: result.dtypes
Out[595]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Parquet#
Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to
make reading and writing data frames efficient, and to make sharing data across data analysis
languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible
while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrame s, supporting all of the pandas
dtypes, including extension dtypes such as datetime with tz.
Several caveats.
Duplicate column names and non-string columns names are not supported.
The pyarrow engine always writes the index to the output, but fastparquet only writes non-default
indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can
force including or omitting indexes with the index argument, regardless of the underlying engine.
Index level names, if specified, must be strings.
In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.
Non supported types include Interval and actual Python object types. These will raise a helpful error message
on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
The pyarrow engine preserves extension data types such as the nullable integer and string data
type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols,
see the extension types documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto.
If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto,
then pyarrow is tried, and falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
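For example, a sketch of setting the default engine globally via the option mentioned above (the file name is illustrative):
pd.set_option("io.parquet.engine", "pyarrow")
# subsequent calls no longer need engine=
df.to_parquet("example_default.parquet")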
Note
These engines are very similar and should read/write nearly identical parquet format files.
pyarrow>=8.0.0 supports timedelta data, fastparquet>=0.1.4 supports timezone aware datetimes.
These libraries differ by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library).
In [596]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.date_range("20130101", periods=3),
.....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "h": pd.Categorical(list("abc")),
.....: "i": pd.Categorical(list("abc"), ordered=True),
.....: }
.....: )
.....:
In [597]: df
Out[597]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c
In [598]: df.dtypes
Out[598]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Write to a parquet file.
In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow")
In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")
Read from a parquet file.
In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
In [603]: result.dtypes
Out[603]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Read only certain columns of a parquet file.
In [604]: result = pd.read_parquet(
.....: "example_fp.parquet",
.....: engine="fastparquet",
.....: columns=["a", "b"],
.....: )
.....:
In [605]: result = pd.read_parquet(
.....: "example_pa.parquet",
.....: engine="pyarrow",
.....: columns=["a", "b"],
.....: )
.....:
In [606]: result.dtypes
Out[606]:
a object
b int64
dtype: object
Handling indexes#
Serializing a DataFrame to parquet may include the implicit index as one or
more columns in the output file. Thus, this code:
In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [608]: df.to_parquet("test.parquet", engine="pyarrow")
creates a parquet file with three columns if you use pyarrow for serialization:
a, b, and __index_level_0__. If you’re using fastparquet, the
index may or may not
be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject
the file, because that column doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to
to_parquet():
In [609]: df.to_parquet("test.parquet", index=False)
This creates a parquet file with just the two expected columns, a and b.
If your DataFrame has a custom index, you won’t get it back when you load
this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the
underlying engine’s default behavior.
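For example (a sketch; the file name is illustrative):
# force the index to be written regardless of the engine's default
df.to_parquet("test_with_index.parquet", engine="fastparquet", index=True)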
Partitioning Parquet files#
Parquet supports partitioning of data based on the values of one or more columns.
In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
The path specifies the parent directory to which data will be saved.
The partition_cols are the column names by which the dataset will be partitioned.
Columns are partitioned in the order they are given. The partition splits are
determined by the unique values in the partition columns.
The above example creates a partitioned dataset that may look like:
test
├── a=0
│ ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│ └── ...
└── a=1
├── e6ab24a4f45147b49b54a662f0c412a3.parquet
└── ...
ORC#
New in version 1.0.0.
Similar to the parquet format, the ORC Format is a binary columnar serialization
for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the
ORC format, read_orc() and to_orc(). This requires the pyarrow library.
Warning
It is highly recommended to install pyarrow using conda, due to some issues that can occur with pyarrow.
to_orc() requires pyarrow>=7.0.0.
read_orc() and to_orc() are not supported on Windows yet, you can find valid environments on install optional dependencies.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
In [612]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(4.0, 7.0, dtype="float64"),
.....: "d": [True, False, True],
.....: "e": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [613]: df
Out[613]:
a b c d e
0 a 1 4.0 True 2013-01-01
1 b 2 5.0 False 2013-01-02
2 c 3 6.0 True 2013-01-03
In [614]: df.dtypes
Out[614]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Write to an orc file.
In [615]: df.to_orc("example_pa.orc", engine="pyarrow")
Read from an orc file.
In [616]: result = pd.read_orc("example_pa.orc")
In [617]: result.dtypes
Out[617]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Read only certain columns of an orc file.
In [618]: result = pd.read_orc(
.....: "example_pa.orc",
.....: columns=["a", "b"],
.....: )
.....:
In [619]: result.dtypes
Out[619]:
a object
b int64
dtype: object
SQL queries#
The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respects the Python
DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Note
The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to specific function depending on
the provided input (database table name or sql query).
Table names do not need to be quoted if they have special characters.
In the following example, we use the SQLite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
In [620]: from sqlalchemy import create_engine
# Create your engine.
In [621]: engine = create_engine("sqlite:///:memory:")
If you want to manage your own connections you can pass one of those instead. The example below opens a
connection to the database using a Python context manager that automatically closes the connection after
the block has completed.
See the SQLAlchemy docs
for an explanation of how the database connection is handled.
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table("data", conn)
Warning
When you open a connection to a database you are also responsible for closing it.
Side effects of leaving a connection open may include locking the database or
other breaking behaviour.
Writing DataFrames#
Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().
id    Date          Col_1    Col_2    Col_3
26    2012-10-18    X         25.7    True
42    2012-10-19    Y        -12.4    False
63    2012-10-20    Z         5.73    True
In [622]: import datetime
In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"]
In [624]: d = [
.....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True),
.....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False),
.....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True),
.....: ]
.....:
In [625]: data = pd.DataFrame(d, columns=c)
In [626]: data
Out[626]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True
In [627]: data.to_sql("data", engine)
Out[627]: 3
With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
In [628]: data.to_sql("data_chunked", engine, chunksize=1000)
Out[628]: 3
SQL data types#
to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
In [629]: from sqlalchemy.types import String
In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String})
Out[630]: 3
Note
Due to the limited support for timedelta’s in the different database
flavors, columns with type timedelta64 will be written as integer
values as nanoseconds to the database and a warning will be raised.
Note
Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.
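A minimal sketch of this round trip (the table name cat_demo is illustrative):
df_cat = pd.DataFrame({"col": pd.Categorical(list("aba"))})
df_cat.to_sql("cat_demo", engine, index=False)
# "col" comes back as object, not category
pd.read_sql_table("cat_demo", engine).dtypes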
Datetime data types#
Using SQLAlchemy, to_sql() is capable of writing
datetime data that is timezone naive or timezone aware. However, the resulting
data stored in the database ultimately depends on the supported data type
for datetime data of the database system being used.
The following table lists supported data types for datetime data for some
common databases. Other database dialects may have different data types for
datetime data.
Database      SQL Datetime Types                         Timezone Support
SQLite        TEXT                                       No
MySQL         TIMESTAMP or DATETIME                      No
PostgreSQL    TIMESTAMP or TIMESTAMP WITH TIME ZONE      Yes
When writing timezone aware data to databases that do not support timezones,
the data will be written as timezone naive timestamps that are in local time
with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is
timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas
will convert the data to UTC.
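A minimal sketch (the table name tz_demo is illustrative); with a backend that lacks timezone support the timestamps are stored as timezone-naive local times, as described above:
df_tz = pd.DataFrame(
    {"when": pd.date_range("2021-01-01", periods=3, tz="US/Eastern"), "value": [1, 2, 3]}
)
df_tz.to_sql("tz_demo", engine, index=False)
pd.read_sql_table("tz_demo", engine, parse_dates=["when"])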
Insertion method#
The parameter method controls the SQL insertion clause used.
Possible values are:
None: Uses standard SQL INSERT clause (one per row).
'multi': Pass multiple values in a single INSERT clause.
It uses a special SQL syntax not supported by all backends.
This usually provides better performance for analytic databases
like Presto and Redshift, but has worse performance for
traditional SQL backend if the table contains many columns.
For more information check the SQLAlchemy documentation.
callable with signature (pd_table, conn, keys, data_iter):
This can be used to implement a more performant insertion method based on
specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
Reading tables#
read_sql_table() will read a database table given the
table name and optionally a subset of columns to read.
Note
In order to use read_sql_table(), you must have the
SQLAlchemy optional dependency installed.
In [631]: pd.read_sql_table("data", engine)
Out[631]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
Note
Note that pandas infers column dtypes from query outputs, and not by looking
up data types in the physical database schema. For example, assume userid
is an integer column in a table. Then, intuitively, select userid ... will
return integer-valued series, while select cast(userid as text) ... will
return object-valued (str) series. Accordingly, if the query output is empty,
then all resulting columns will be returned as object-valued (since they are
most general). If you foresee that your query will sometimes generate an empty
result, you may want to explicitly typecast afterwards to ensure dtype
integrity.
You can also specify the name of the column as the DataFrame index,
and specify a subset of columns to be read.
In [632]: pd.read_sql_table("data", engine, index_col="id")
Out[632]:
index Date Col_1 Col_2 Col_3
id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True
In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"])
Out[633]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
And you can explicitly force columns to be parsed as dates:
In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"])
Out[634]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
If needed you can explicitly specify a format string, or a dict of arguments
to pass to pandas.to_datetime():
pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
pd.read_sql_table(
"data",
engine,
parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)
You can check if a table exists using has_table()
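For example, a sketch using the engine created above (assuming the pandas.io.sql.has_table helper, which takes the table name and the connection):
from pandas.io import sql
sql.has_table("data", engine)  # True if the table exists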
Schema support#
Reading from and writing to different schema’s is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schema’s). For example:
df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")
Querying#
You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
In [635]: pd.read_sql_query("SELECT * FROM data", engine)
Out[635]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1
Of course, you can specify a more “complex” query.
In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[636]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument.
Specifying this will return an iterator through chunks of the query result:
In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
In [638]: df.to_sql("data_chunks", engine, index=False)
Out[638]: 20
In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.070470 0.901320 0.937577
1 0.295770 1.420548 -0.005283
2 -1.518598 -0.730065 0.226497
3 -2.061465 0.632115 0.853619
4 2.719155 0.139018 0.214557
a b c
0 -1.538924 -0.366973 -0.748801
1 -0.478137 -1.559153 -3.097759
2 -2.320335 -0.221090 0.119763
3 0.608228 1.064810 -0.780506
4 -2.736887 0.143539 1.170191
a b c
0 -1.573076 0.075792 -1.722223
1 -0.774650 0.803627 0.221665
2 0.584637 0.147264 1.057825
3 -0.284136 0.912395 1.552808
4 0.189376 -0.109830 0.539341
a b c
0 0.592591 -0.155407 -1.356475
1 0.833837 1.524249 1.606722
2 -0.029487 -0.051359 1.700152
3 0.921484 -0.926347 0.979818
4 0.182380 -0.186376 0.049820
You can also run a plain query without creating a DataFrame with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
from pandas.io import sql
sql.execute("SELECT * FROM table_name", engine)
sql.execute(
"INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)
Engine connection examples#
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:[email protected]:5432/mydatabase")
engine = create_engine("mysql+mysqldb://scott:[email protected]/foo")
engine = create_engine("oracle://scott:[email protected]:1521/sidname")
engine = create_engine("mssql+pyodbc://mydsn")
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")
# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")
For more information see the examples in the SQLAlchemy documentation.
Advanced SQLAlchemy queries#
You can use SQLAlchemy constructs to describe your query.
Use sqlalchemy.text() to specify query parameters in a backend-neutral way
In [640]: import sqlalchemy as sa
In [641]: pd.read_sql(
.....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"}
.....: )
.....:
Out[641]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions
In [642]: metadata = sa.MetaData()
In [643]: data_table = sa.Table(
.....: "data",
.....: metadata,
.....: sa.Column("index", sa.Integer),
.....: sa.Column("Date", sa.DateTime),
.....: sa.Column("Col_1", sa.String),
.....: sa.Column("Col_2", sa.Float),
.....: sa.Column("Col_3", sa.Boolean),
.....: )
.....:
In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine)
Out[644]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()
In [645]: import datetime as dt
In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date"))
In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)})
Out[647]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True
Sqlite fallback#
The use of sqlite is supported without using SQLAlchemy.
This mode requires a Python database adapter which respects the Python
DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(":memory:")
And then issue the following queries:
data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)
Google BigQuery#
Warning
Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package pandas-gbq. You can pip install pandas-gbq to get it.
The pandas-gbq package provides functionality to read/write from Google BigQuery.
pandas integrates with this external package. If pandas-gbq is installed, you can
use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the
respective functions from pandas-gbq.
Full documentation can be found here.
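A minimal sketch (requires pandas-gbq and Google Cloud credentials; the project and table identifiers are placeholders):
# write a DataFrame to a BigQuery table, then read it back with SQL
df.to_gbq("my_dataset.my_table", project_id="my-project")
pd.read_gbq("SELECT * FROM my_dataset.my_table", project_id="my-project")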
Stata format#
Writing to stata format#
The method to_stata() will write a DataFrame
into a .dta file. The format version of this file is always 115 (Stata 12).
In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [649]: df.to_stata("stata.dta")
Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating point data
types are stored as the basic missing data type (. in Stata).
Note
It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.
Warning
Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.
Warning
StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.
Reading from Stata format#
The top-level function read_stata will read a dta file and return
either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [650]: pd.read_stata("stata.dta")
Out[650]:
index A B
0 0 -1.690072 0.405144
1 1 -1.511309 -1.531396
2 2 0.572698 -1.106845
3 3 -1.185859 0.174564
4 4 0.603797 -1.796129
5 5 -0.791679 1.173795
6 6 -0.277710 1.859988
7 7 -0.258413 1.251808
8 8 1.443262 0.441553
9 9 1.168163 -2.054946
Specifying a chunksize yields a
StataReader instance that can be used to
read chunksize lines from the file at a time. The StataReader
object can be used as an iterator.
In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader:
.....: for df in reader:
.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)
For more fine-grained control, use iterator=True and specify
chunksize with each call to
read().
In [652]: with pd.read_stata("stata.dta", iterator=True) as reader:
.....: chunk1 = reader.read(5)
.....: chunk2 = reader.read(5)
.....:
Currently the index is retrieved as a column.
The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.
The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.
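A minimal sketch combining these options with the file written above (which has no value labels, so the returned mapping may be empty):
with pd.read_stata("stata.dta", iterator=True, convert_categoricals=False,
                   convert_missing=True) as reader:
    df_raw = reader.read()          # raw codes and StataMissingValue objects
    labels = reader.value_labels()  # value labels; read() must be called first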
Note
read_stata() and
StataReader support .dta formats 113-115
(Stata 10-12), 117 (Stata 13), and 118 (Stata 14).
Note
Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.
Categorical data#
Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.
Warning
Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical
variables using the keyword argument convert_categoricals (True by default).
The keyword argument order_categoricals (True by default) determines
whether imported Categorical variables are ordered.
Note
When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.
Note
Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
SAS formats#
The top-level function read_sas() can read (but not write) SAS
XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas("sas_data.sas7bdat")
Obtain an iterator and read an XPORT file 100,000 lines at a time:
def do_something(chunk):
pass
with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
for chunk in rdr:
do_something(chunk)
The specification for the xport file format is available from the SAS
web site.
No official documentation is available for the SAS7BDAT format.
SPSS formats#
New in version 0.25.0.
The top-level function read_spss() can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
SPSS files contain column names. By default the
whole file is read, categorical columns are converted into pd.Categorical,
and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False
to avoid converting categorical columns into pd.Categorical.
Read an SPSS file:
df = pd.read_spss("spss_data.sav")
Extract a subset of columns contained in usecols from an SPSS file and
avoid converting categorical columns into pd.Categorical:
df = pd.read_spss(
"spss_data.sav",
usecols=["foo", "bar"],
convert_categoricals=False,
)
More information about the SAV and ZSAV file formats is available here.
Other file formats#
pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.
netCDF#
xarray provides data structures inspired by the pandas DataFrame for working
with multi-dimensional datasets, with a focus on the netCDF file format and
easy conversion to and from pandas.
Performance considerations#
This is an informal comparison of various IO methods, using pandas
0.24.2. Timings are machine dependent and small differences should be
ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
The following test functions will be used below to compare the performance of several IO methods:
import os
import sqlite3

import numpy as np
import pandas as pd

sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
def test_sql_write(df):
if os.path.exists("test.sql"):
os.remove("test.sql")
sql_db = sqlite3.connect("test.sql")
df.to_sql(name="test_table", con=sql_db)
sql_db.close()
def test_sql_read():
sql_db = sqlite3.connect("test.sql")
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()
def test_hdf_fixed_write(df):
df.to_hdf("test_fixed.hdf", "test", mode="w")
def test_hdf_fixed_read():
pd.read_hdf("test_fixed.hdf", "test")
def test_hdf_fixed_write_compress(df):
df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")
def test_hdf_fixed_read_compress():
pd.read_hdf("test_fixed_compress.hdf", "test")
def test_hdf_table_write(df):
df.to_hdf("test_table.hdf", "test", mode="w", format="table")
def test_hdf_table_read():
pd.read_hdf("test_table.hdf", "test")
def test_hdf_table_write_compress(df):
df.to_hdf(
"test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
)
def test_hdf_table_read_compress():
pd.read_hdf("test_table_compress.hdf", "test")
def test_csv_write(df):
df.to_csv("test.csv", mode="w")
def test_csv_read():
pd.read_csv("test.csv", index_col=0)
def test_feather_write(df):
df.to_feather("test.feather")
def test_feather_read():
pd.read_feather("test.feather")
def test_pickle_write(df):
df.to_pickle("test.pkl")
def test_pickle_read():
pd.read_pickle("test.pkl")
def test_pickle_write_compress(df):
df.to_pickle("test.pkl.compress", compression="xz")
def test_pickle_read_compress():
pd.read_pickle("test.pkl.compress", compression="xz")
def test_parquet_write(df):
df.to_parquet("test.parquet")
def test_parquet_read():
pd.read_parquet("test.parquet")
When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.
In [4]: %timeit test_sql_write(df)
3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit test_hdf_fixed_write(df)
19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit test_hdf_fixed_write_compress(df)
19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit test_hdf_table_write(df)
449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit test_hdf_table_write_compress(df)
448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: %timeit test_csv_write(df)
3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit test_feather_write(df)
9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit test_pickle_write(df)
30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [12]: %timeit test_pickle_write_compress(df)
4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit test_parquet_write(df)
67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and
test_hdf_fixed_read.
In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit test_hdf_fixed_read()
19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit test_hdf_fixed_read_compress()
19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [17]: %timeit test_hdf_table_read()
38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [18]: %timeit test_hdf_table_read_compress()
38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [19]: %timeit test_csv_read()
452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [20]: %timeit test_feather_read()
12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit test_pickle_read()
18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit test_pickle_read_compress()
915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [23]: %timeit test_parquet_read()
24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).
29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
| 64
| 493
|
How do I use pandas to open file and process each row for that file?
So I have a dataframe containing image filenames and multiple (X, Y) coordinates for each image, like this:

file     x    y
File1    1    2
File1    103  6
File2    6    3
File2    4    9

I want to open each file and draw the points on it.
I want to open each file in the file column once only. (I know how to open and draw on the file, I am asking how to loop through the dataframe correctly). Then process each line for that file.
I know I can use perhaps df['file'].unique() but don't know where to go from that.
I can probably work this out naively but is there a clever pandas way of doing it? I imagine it may have been asked but I can't figure out how to search for it.
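One idiomatic sketch using groupby, so each file is opened exactly once (open_image and draw_point are placeholder helpers, not part of the question):
for fname, rows in df.groupby("file"):
    img = open_image(fname)                 # open the file once
    for x, y in zip(rows["x"], rows["y"]):
        draw_point(img, x, y)               # process each row for that file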
|
60,202,370
|
Reverse all the values in column from highest values to lowest values and vice versa in pandas
|
<pre><code> column column1
0 23 12
1 12 23
2 14 21
3 17 20
4 21 14
5 18 18
6 20 17
</code></pre>
<p>I want the highest value of the <strong>column</strong> to be replaced with the lowest value, the 2nd highest with the 2nd lowest, and so on, until the lowest value is replaced with the highest value of the column, as shown above in <strong>column1</strong> (the expected output). How to do that in pandas?</p>
| 60,202,492
| 2020-02-13T07:17:48.073000
| 2
| null | 0
| 76
|
python|pandas
|
<p>You can replace sorted values with reverse sorted values:</p>
<pre><code>>>> df['column2'] = df.column.replace(df.column.sort_values().values,
df.column.sort_values(ascending=False))
>>> df
column column2
0 23 12
1 12 23
2 14 21
3 17 20
4 21 14
5 18 18
6 20 17
</code></pre>
| 2020-02-13T07:25:58.620000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.transpose.html
|
pandas.DataFrame.transpose#
DataFrame.transpose(*args, copy=False)[source]#
Transpose index and columns.
Reflect the DataFrame over its main diagonal by writing rows as columns
and vice-versa. The property T is an accessor to the method
transpose().
Parameters
*args : tuple, optional
Accepted for compatibility with NumPy.
copy : bool, default False
Whether to copy the data after transposing, even for DataFrames
with a single dtype.
Note that a copy is always required for mixed dtype DataFrames,
or for DataFrames with any extension types.
Returns
DataFrameThe transposed DataFrame.
See also
numpy.transposePermute the dimensions of a given array.
Notes
Transposing a DataFrame with mixed dtypes will result in a homogeneous
DataFrame with the object dtype. In such a case, a copy of the data
is always made.
Examples
Square DataFrame with homogeneous dtype
>>> d1 = {'col1': [1, 2], 'col2': [3, 4]}
>>> df1 = pd.DataFrame(data=d1)
>>> df1
col1 col2
0 1 3
1 2 4
>>> df1_transposed = df1.T # or df1.transpose()
>>> df1_transposed
0 1
col1 1 2
col2 3 4
When the dtype is homogeneous in the original DataFrame, we get a
transposed DataFrame with the same dtype:
>>> df1.dtypes
col1 int64
col2 int64
dtype: object
>>> df1_transposed.dtypes
0 int64
1 int64
dtype: object
Non-square DataFrame with mixed dtypes
>>> d2 = {'name': ['Alice', 'Bob'],
... 'score': [9.5, 8],
... 'employed': [False, True],
... 'kids': [0, 0]}
>>> df2 = pd.DataFrame(data=d2)
>>> df2
name score employed kids
0 Alice 9.5 False 0
1 Bob 8.0 True 0
>>> df2_transposed = df2.T # or df2.transpose()
>>> df2_transposed
0 1
name Alice Bob
score 9.5 8.0
employed False True
kids 0 0
When the DataFrame has mixed dtypes, we get a transposed DataFrame with
the object dtype:
>>> df2.dtypes
name object
score float64
employed bool
kids int64
dtype: object
>>> df2_transposed.dtypes
0 object
1 object
dtype: object
| 571
| 936
|
Reverse all the values in column from highest values to lowest values and vice versa in pandas
column column1
0 23 12
1 12 23
2 14 21
3 17 20
4 21 14
5 18 18
6 20 17
I want the highest value of the column to be replaced with the lowest value, the 2nd highest with the 2nd lowest, and so on, until the lowest value is replaced with the highest value of the column, as shown above in column1 (the expected output). How to do that in pandas?
|
66,818,103
|
Converting a collection of txt files into ONE CSV file by using Python
|
<p>I have a folder with multiple txt files. Each file contains information about a client of my friend's business that he entered manually from a hardcopy document. This information can be e-mails, addresses, requestIDs, etc. Each time he gets a new client he creates a new txt file in that folder.</p>
<p>Using Python, I want to create a CSV file that contains all information about all clients from the txt files so that I can open it in Excel. The files' content looks like this:</p>
<pre><code>Date:24/02/2021
Email:*****@gmail.com
Product:Hard Drives
Type:Sandisk
Size:128GB
</code></pre>
<p>Some files have additional information. And each file is labeled by an ID (which is the name of the txt file).</p>
<p>What I'm thinking of is to make the code create a dictionary for each file. Each dict will be named after the txt file. The data types (date, email, product, etc.) will be the indexes (keep in mind that not all files have the same number of indexes, as some files have more or less data than others), followed by the values. Then this collection of dicts would be converted into one CSV file that, when opened in Excel, should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">FileID</th>
<th style="text-align: left;">Date</th>
<th style="text-align: left;">Email</th>
<th style="text-align: left;">Address</th>
<th style="text-align: left;">Product</th>
<th style="text-align: left;">Type</th>
<th style="text-align: left;">Color</th>
<th style="text-align: left;">Size</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">01-2021</td>
<td style="text-align: left;">02-01-2021</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;">Hard Drive</td>
<td style="text-align: left;">SanDisk</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">128GB</td>
</tr>
</tbody>
</table>
</div>
<p>Is this a good way to achieve this goal, or is there a shorter and more effective one?</p>
<p>This code by @dukkee seems to logically fulfill the task required:</p>
<pre><code>import os
import pandas as pd
FOLDER_PATH = "folder_path"
raw_data = []
for filename in os.listdir(FOLDER_PATH):
with open(os.path.join(FOLDER_PATH, filename)) as fp:
file_data = dict(line.split(":", 1) for line in fp if line)
file_data["FileID"] = filename
raw_data.append(file_data)
frame = pd.DataFrame(raw_data)
frame.to_csv("output.csv", index=False)
</code></pre>
<p>However, it keeps showing me this error:</p>
<p><a href="https://i.stack.imgur.com/Mo9CQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mo9CQ.png" alt="enter image description here" /></a></p>
<p>The following code by @dm2 should also work but it also shows me an error which I couldn't figure why:</p>
<pre><code>import pandas as pd
import os
files = os.listdir('test/')
df_list = [pd.read_csv(f'test/{file}', sep = ':', header = None).set_index(0).T for file in files]
df_out = pd.concat(df_list)
# to reindex by filename
df_out.index = [file.strip('.txt') for file in files]
</code></pre>
<p><a href="https://i.stack.imgur.com/NqcEV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NqcEV.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/ay2DD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ay2DD.png" alt="enter image description here" /></a></p>
<p>I made sure that all txt files have no empty lines, but this wasn't the solution for these errors.</p>
| 66,818,446
| 2021-03-26T13:43:31.147000
| 2
| null | 1
| 76
|
python|pandas
|
<p>You can actually read these files into pandas DataFrames and then concatenate them into one single DataFrame.</p>
<p>I've made a test folder with 5 slightly different test files (named '1.txt', '2.txt', ...).</p>
<p>Code:</p>
<pre><code>import pandas as pd
import os
files = os.listdir('test/')
df_list = [pd.read_csv(f'test/{file}', sep = ':', header = None).set_index(0).T for file in files]
df_out = pd.concat(df_list)
# to reindex by filename
df_out.index = [file.strip('.txt') for file in files]
</code></pre>
<p>df_out:</p>
<pre><code>0 Date Email Product Type Size Size2 Type2 Test
1 24/02/2021 *****@gmail.com Hard Drives Sandisk 128GB 128GB NaN NaN
2 24/02/2021 *****@gmail.com Hard Drives Sandisk 128GB NaN Sandisk NaN
3 24/02/2021 *****@gmail.com Hard Drives Sandisk 128GB NaN NaN Test
4 24/02/2021 *****@gmail.com Hard Drives Sandisk 128GB NaN NaN 2
5 24/02/2021 *****@gmail.com Hard Drives Sandisk 128GB NaN NaN NaN
</code></pre>
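<p>Continuing from the snippet above: if the end goal from the question is a single CSV that opens in Excel, the combined frame can then be written out (the <code>FileID</code> index label is an assumption matching the question's expected table):</p>
<pre><code>df_out.to_csv('output.csv', index_label='FileID')
</code></pre>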
| 2021-03-26T14:04:25.257000
| 1
|
https://pandas.pydata.org/docs/user_guide/io.html
|
IO tools (text, CSV, HDF5, …)#
The pandas I/O API is a set of top level reader functions accessed like
pandas.read_csv() that generally return a pandas object. The corresponding
writer functions are object methods that are accessed like
DataFrame.to_csv(). Below is a table containing available readers and
writers.
Format Type   Data Description        Reader          Writer
text          CSV                     read_csv        to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json       to_json
text          HTML                    read_html       to_html
text          LaTeX                                   Styler.to_latex
text          XML                     read_xml        to_xml
text          Local clipboard         read_clipboard  to_clipboard
binary        MS Excel                read_excel      to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf        to_hdf
binary        Feather Format          read_feather    to_feather
binary        Parquet Format          read_parquet    to_parquet
binary        ORC Format              read_orc        to_orc
binary        Stata                   read_stata      to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle     to_pickle
SQL           SQL                     read_sql        to_sql
SQL           Google BigQuery         read_gbq        to_gbq
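As a minimal sketch of how a reader/writer pair from the table is used together (the file name is arbitrary; every pair follows the same pattern):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# writers are DataFrame methods ...
df.to_csv("example.csv", index=False)

# ... while readers are top-level pandas functions
round_tripped = pd.read_csv("example.csv")
assert round_tripped.equals(df)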
Here is an informal performance comparison for some of these IO methods.
Note
For examples that use the StringIO class, make sure you import it
with from io import StringIO for Python 3.
CSV & text files#
The workhorse function for reading text files (a.k.a. flat files) is
read_csv(). See the cookbook for some advanced strategies.
Parsing options#
read_csv() accepts the following common arguments:
Basic#
filepath_or_buffer : various
Either a path to a file (a str, pathlib.Path,
or py:py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as an open file or
StringIO).
sep : str, defaults to ',' for read_csv(), \t for read_table()
Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used and automatically detect the separator by Python’s builtin sniffer tool,
csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
delimiter : str, default None
Alternative argument name for sep.
delim_whitespace : boolean, default False
Specifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.
Column and index locations and names#
header : int or list of ints, default 'infer'
Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first line of the file, if column names are
passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to replace
existing names.
The header can be a list of ints that specify row locations
for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows
that are not specified will be skipped (e.g. 2 in this example is
skipped). Note that this parameter ignores commented lines and empty
lines if skip_blank_lines=True, so header=0 denotes the first
line of data rather than the first line of the file.
names : array-like, default None
List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_col : int, str, sequence of int / str, or False, optional, default None
Column(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note
index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in the body
of the data file, then a default index is used. If it is larger, then
the first columns are used as index so that the remaining number of fields in
the body are equal to the number of fields in the header.
The first row after the header is used to determine the number of columns,
which will go into the index. If the subsequent rows contain less columns
than the first row, they are filled with NaN.
This can be avoided through usecols. This ensures that the columns are
taken as is and the trailing data are ignored.
usecols : list-like or callable, default None
Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To
instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for
['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names,
returning names where the callable function evaluates to True:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to {func_name} to squeeze
the data.
prefix : str, default None
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
In [6]: data = "col1,col2,col3\na,b,1"
In [7]: df = pd.read_csv(StringIO(data))
In [8]: df.columns = [f"pre_{col}" for col in df.columns]
In [9]: df
Out[9]:
pre_col1 pre_col2 pre_col3
0 a b 1
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’.
Passing in False will cause data to be overwritten if there are duplicate
names in the columns.
Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
General parsing configuration#
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}
Use str or object together with suitable na_values settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
engine : {'c', 'python', 'pyarrow'}
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be
integers or column labels.
true_values : list, default None
Values to consider as True.
false_values : list, default None
Values to consider as False.
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise:
In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [11]: pd.read_csv(StringIO(data))
Out[11]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]:
col1 col2 col3
0 a b 2
skipfooter : int, default 0
Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser)
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
NA and missing data handling#
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. See na values const below
for a list of the values interpreted as NaN by default; a short sketch of the
per-column form is shown at the end of this NA-handling group below.
keep_default_na : boolean, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values.
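For example, a minimal sketch of the per-column form of na_values (column names and markers are illustrative):
import pandas as pd
from io import StringIO

data = "id,score\nA,10\nB,missing\nC,-1"

# 'missing' and '-1' are treated as NaN only in the 'score' column;
# all other columns keep the default NaN markers.
df = pd.read_csv(StringIO(data), na_values={"score": ["missing", "-1"]})
# rows B and C now hold NaN in 'score', so that column is parsed as float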
Datetime handling#
parse_dates : boolean or list of ints or names or list of lists or dict, default False.
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date
column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.
Note
A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the
original columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format.
cache_dates : boolean, default True
If True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration#
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with
get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See iterating and chunking below.
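For example, a minimal sketch of chunked reading (the chunk size and data are illustrative):
import pandas as pd
from io import StringIO

data = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10))

# chunksize returns a TextFileReader that yields DataFrames of at most 4 rows
total = 0
with pd.read_csv(StringIO(data), chunksize=4) as reader:
    for chunk in reader:
        total += chunk["b"].sum()
print(total)  # 90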
Quoting, compression, and file format#
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'
For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip,
bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’,
‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression. Can also be a dict with key 'method'
set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are
forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor.
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Changed in version 1.1.0: dict option extended to support gzip and bz2.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
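A small sketch of the dict form, writing and then reading back a gzip-compressed CSV (file name and option values are illustrative):
import pandas as pd

df = pd.DataFrame({"a": range(5)})

# reproducible, fast gzip as suggested above
df.to_csv(
    "data.csv.gz",
    index=False,
    compression={"method": "gzip", "compresslevel": 1, "mtime": 1},
)

# 'infer' (the default) picks gzip from the '.gz' suffix when reading back
pd.read_csv("data.csv.gz")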
thousands : str, default None
Thousands separator.
decimal : str, default '.'
Character to recognize as decimal point. E.g. use ',' for European data.
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the
high-precision converter, and round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1)
The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE,
indicate whether or not to interpret two consecutive quotechar elements
inside a field as a single quotechar element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.
encoding : str, default None
Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of
Python standard encodings.
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
Error handling#
error_bad_lines : boolean, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be
returned. If False, then these “bad lines” will dropped from the
DataFrame that is returned. See bad lines
below.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
warn_bad_lines : boolean, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
on_bad_lines : (‘error’, ‘warn’, ‘skip’), default ‘error’
Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are :
‘error’, raise an ParserError when a bad line is encountered.
‘warn’, print a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
Specifying column data types#
You can indicate the data type for the whole DataFrame or individual
columns:
In [13]: import numpy as np
In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [16]: df = pd.read_csv(StringIO(data), dtype=object)
In [17]: df
Out[17]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [18]: df["a"][0]
Out[18]: '1'
In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
In [20]: df.dtypes
Out[20]:
a int64
b object
c float64
d Int64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s)
contain only one dtype. If you’re unfamiliar with these concepts, you can
see here to learn more about dtypes, and
here to learn more about object conversion in
pandas.
For instance, you can use the converters argument
of read_csv():
In [21]: data = "col_1\n1\n2\n'A'\n4.22"
In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
In [23]: df
Out[23]:
col_1
0 1
1 2
2 'A'
3 4.22
In [24]: df["col_1"].apply(type).value_counts()
Out[24]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the
dtypes after reading in the data,
In [25]: df2 = pd.read_csv(StringIO(data))
In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
In [27]: df2
Out[27]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [28]: df2["col_1"].apply(type).value_counts()
Out[28]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing
as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes
depends on your specific needs. In the case above, if you wanted to NaN out
the data anomalies, then to_numeric() is probably your best option.
However, if you wanted for all the data to be coerced, no matter the type, then
using the converters argument of read_csv() would certainly be
worth trying.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently,
you can end up with column(s) with mixed dtypes. For example,
In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
In [30]: df = pd.DataFrame({"col_1": col_1})
In [31]: df.to_csv("foo.csv")
In [32]: mixed_df = pd.read_csv("foo.csv")
In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
will result with mixed_df containing an int dtype for certain chunks
of the column, and str for others due to the mixed dtypes from the
data that was read in. It is important to note that the overall column will be
marked with a dtype of object, which is used for columns with mixed dtypes.
Specifying categorical dtype#
Categorical columns can be parsed directly by specifying dtype='category' or
dtype=CategoricalDtype(categories, ordered).
In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [36]: pd.read_csv(StringIO(data))
Out[36]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]:
col1 object
col2 object
col3 int64
dtype: object
In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
Individual columns can be parsed as a Categorical using a dict
specification:
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]:
col1 category
col2 object
col3 int64
dtype: object
Specifying dtype='category' will result in an unordered Categorical
whose categories are the unique values observed in the data. For more
control on the categories and order, create a
CategoricalDtype ahead of time, and pass that for
that column’s dtype.
In [40]: from pandas.api.types import CategoricalDtype
In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)
In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]:
col1 category
col2 object
col3 int64
dtype: object
When using dtype=CategoricalDtype, “unexpected” values outside of
dtype.categories are treated as missing values.
In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c'
In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1
Out[44]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): ['a', 'b', 'd']
This matches the behavior of Categorical.set_categories().
Note
With dtype='category', the resulting categories will always be parsed
as strings (object dtype). If the categories are numeric they can be
converted using the to_numeric() function, or as appropriate, another
converter such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (
all numeric, all datetimes, etc.), the conversion is done automatically.
In [45]: df = pd.read_csv(StringIO(data), dtype="category")
In [46]: df.dtypes
Out[46]:
col1 category
col2 category
col3 category
dtype: object
In [47]: df["col3"]
Out[47]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): ['1', '2', '3']
In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories)
In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories)
In [50]: df["col3"]
Out[50]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
Naming and using columns#
Handling column names#
A file may or may not have a header row. pandas assumes the first row should be
used as the column names:
In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
In [52]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [54]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
Out[55]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None)
Out[56]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header. This will skip the preceding rows:
In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
In [58]: pd.read_csv(StringIO(data), header=1)
Out[58]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
Note
Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first non-blank line of the file, if column
names are passed explicitly then the behavior is identical to
header=None.
Duplicate names parsing#
Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
In [59]: data = "a,b,a\n0,1,2\n3,4,5"
In [60]: pd.read_csv(StringIO(data))
Out[60]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default,
which modifies a series of duplicate columns ‘X’, …, ‘X’ to become
‘X’, ‘X.1’, …, ‘X.N’.
Filtering columns (usecols)#
The usecols argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
In [62]: pd.read_csv(StringIO(data))
Out[62]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"])
Out[63]:
b d
0 2 foo
1 5 bar
2 8 baz
In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[64]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
Out[65]:
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to
use in the final result:
In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
Out[66]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c”
columns from the output.
Comments and empty lines#
Ignoring line comments and empty lines#
If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well.
In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6"
In [68]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [69]: pd.read_csv(StringIO(data), comment="#")
Out[69]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False, then read_csv will not ignore blank lines:
In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[71]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header uses row numbers (ignoring commented/empty
lines), while skiprows uses line numbers (including commented/empty lines):
In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3"
In [73]: pd.read_csv(StringIO(data), comment="#", header=1)
Out[73]:
A B C
0 1 2 3
In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3"
In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2)
Out[75]:
a b c
0 1 2 3
If both header and skiprows are specified, header will be
relative to the end of skiprows. For example:
In [76]: data = (
....: "# empty\n"
....: "# second empty line\n"
....: "# third emptyline\n"
....: "X,Y,Z\n"
....: "1,2,3\n"
....: "A,B,C\n"
....: "1,2.,4.\n"
....: "5.,NaN,10.0\n"
....: )
....:
In [77]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
Out[78]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
Comments#
Sometimes comments or meta data may be included in a file:
In [79]: print(open("tmp.csv").read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [80]: df = pd.read_csv("tmp.csv")
In [81]: df
Out[81]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment keyword:
In [82]: df = pd.read_csv("tmp.csv", comment="#")
In [83]: df
Out[83]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
Dealing with Unicode data#
The encoding argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [84]: from io import BytesIO
In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
In [86]: data = data.decode("utf8").encode("latin-1")
In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1")
In [88]: df
Out[88]:
word length
0 Träumen 7
1 Grüße 5
In [89]: df["word"][1]
Out[89]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t
parse correctly at all without specifying the encoding. Full list of Python
standard encodings.
Index columns and trailing delimiters#
If a file has one more column of data than the number of column names, the
first column will be used as the DataFrame’s row names:
In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [93]: pd.read_csv(StringIO(data), index_col=0)
Out[93]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [95]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [96]: pd.read_csv(StringIO(data))
Out[96]:
a b c
4 apple bat NaN
8 orange cow NaN
In [97]: pd.read_csv(StringIO(data), index_col=False)
Out[97]:
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the
index_col specification is based on that subset, not the original data.
In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [99]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"])
Out[100]:
b c
4 bat NaN
8 cow NaN
In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
Out[101]:
b c
4 bat NaN
8 cow NaN
Date Handling#
Specifying date columns#
To better facilitate working with datetime data, read_csv()
uses the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [102]: with open("foo.csv", mode="w") as f:
.....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5")
.....:
# Use a column as an index, and parse it as dates.
In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True)
In [104]: df
Out[104]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are Python datetime objects
In [105]: df.index
Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately,
or store various date fields separately. the parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
In [106]: data = (
.....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
.....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
.....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
.....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
.....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
.....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900"
.....: )
.....:
In [107]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
In [109]: df
Out[109]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col keyword:
In [110]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True
.....: )
.....:
In [111]: df
Out[111]:
1_2 1_3 0 ... 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59
[6 rows x 7 columns]
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]] means the two columns should be parsed into a
single column.
You can also use a dict to specify custom name columns:
In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
In [114]: df
Out[114]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:
In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [116]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0
.....: ) # index is the nominal column
.....:
In [117]: df
Out[117]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. For non-standard
datetime parsing, use to_datetime() after pd.read_csv.
Note
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster, ~20x has been observed.
Date parsing functions#
Finally, the parser allows you to specify a custom date_parser function to
take full advantage of the flexibility of the date parsing API:
In [118]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime
.....: )
.....:
In [119]: df
Out[119]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:
date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).
If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).
Note that performance-wise, you should try these methods of parsing dates in order:
Try to infer the format using infer_datetime_format=True (see section below).
If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...).
If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
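A minimal sketch of option 2, a vectorized date_parser with an explicit format (the format string matches the sample data below and is an assumption for real files):
import pandas as pd
from io import StringIO

data = "when,value\n27/01/1999 19:00,0.81\n27/01/1999 20:00,0.01"

df = pd.read_csv(
    StringIO(data),
    parse_dates=["when"],
    # vectorized: receives the whole column of strings at once
    date_parser=lambda col: pd.to_datetime(col, format="%d/%m/%Y %H:%M"),
)
# df["when"] is now datetime64[ns]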
Parsing a CSV with mixed timezones#
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with parse_dates.
In [120]: content = """\
.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:
In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"])
In [122]: df["a"]
Out[122]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied
to_datetime() with utc=True as the date_parser.
In [123]: df = pd.read_csv(
.....: StringIO(content),
.....: parse_dates=["a"],
.....: date_parser=lambda col: pd.to_datetime(col, utc=True),
.....: )
.....:
In [124]: df["a"]
Out[124]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
Inferring datetime format#
If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00):
“20111230”
“2011/12/30”
“20111230 00:00:00”
“12/30/2011 00:00:00”
“30/Dec/2011 00:00:00”
“30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [125]: df = pd.read_csv(
.....: "foo.csv",
.....: index_col=0,
.....: parse_dates=True,
.....: infer_datetime_format=True,
.....: )
.....:
In [126]: df
Out[126]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
International date formats#
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:
In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c"
In [128]: print(data)
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [129]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [130]: pd.read_csv("tmp.csv", parse_dates=[0])
Out[130]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0])
Out[131]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
Writing CSVs to binary file objects#
New in version 1.2.0.
df.to_csv(..., mode="wb") allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
mode as Pandas will auto-detect whether the file object is
opened in text or binary mode.
In [132]: import io
In [133]: data = pd.DataFrame([0, 1, 2])
In [134]: buffer = io.BytesIO()
In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")
Specifying method for floating-point conversion#
The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [136]: val = "0.3066101993807095471566981359501369297504425048828125"
In [137]: data = "a,b,c\n1,2,{0}".format(val)
In [138]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision=None,
.....: )["c"][0] - float(val)
.....: )
.....:
Out[138]: 5.551115123125783e-17
In [139]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision="high",
.....: )["c"][0] - float(val)
.....: )
.....:
Out[139]: 5.551115123125783e-17
In [140]: abs(
.....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0]
.....: - float(val)
.....: )
.....:
Out[140]: 0.0
Thousand separators#
For large numbers that have been written with a thousands separator, you can
set the thousands keyword to a string of length 1 so that integers will be parsed
correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [141]: data = (
.....: "ID|level|category\n"
.....: "Patient1|123,000|x\n"
.....: "Patient2|23,000|y\n"
.....: "Patient3|1,234,018|z"
.....: )
.....:
In [142]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [143]: df = pd.read_csv("tmp.csv", sep="|")
In [144]: df
Out[144]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [145]: df.level.dtype
Out[145]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",")
In [147]: df
Out[147]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [148]: df.level.dtype
Out[148]: dtype('int64')
NA values#
To control which values are parsed as missing values (which are signified by
NaN), specify a string in na_values. If you specify a list of strings,
then all values in it are considered to be missing values. If you specify a
number (a float, like 5.0 or an integer like 5), the
corresponding equivalent values will also imply a missing value (in this case
effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv("path_to_file.csv", na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in
addition to the defaults. A string will first be interpreted as a numerical
5, then as a NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])
Above, only an empty field will be recognized as NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])
Above, both NA and 0 as strings are NaN.
pd.read_csv("path_to_file.csv", na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as
NaN.
Infinity#
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity).
These will ignore the case of the value, meaning Inf, will also be parsed as np.inf.
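For instance (a minimal sketch):
import numpy as np
import pandas as pd
from io import StringIO

data = "a\ninf\n-inf\nInf"

df = pd.read_csv(StringIO(data))
# all three values are parsed as +/- np.inf, regardless of case
print(np.isinf(df["a"]).all())  # True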
Returning Series#
Using the squeeze keyword, the parser will return output with a single column
as a Series:
Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by
read_csv instead.
In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018"
In [150]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [151]: print(open("tmp.csv").read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [152]: output = pd.read_csv("tmp.csv", squeeze=True)
In [153]: output
Out[153]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [154]: type(output)
Out[154]: pandas.core.series.Series
Boolean values#
The common values True, False, TRUE, and FALSE are all
recognized as boolean. Occasionally you might want to recognize other values
as being boolean. To do this, use the true_values and false_values
options as follows:
In [155]: data = "a,b,c\n1,Yes,2\n3,No,4"
In [156]: print(data)
a,b,c
1,Yes,2
3,No,4
In [157]: pd.read_csv(StringIO(data))
Out[157]:
a b c
0 1 Yes 2
1 3 No 4
In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
Out[158]:
a b c
0 1 True 2
1 3 False 4
Handling “bad” lines#
Some files may have malformed lines with too few fields or too many. Lines with
too few fields will have NA values filled in the trailing fields. Lines with
too many fields will raise an error by default:
In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
In [160]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
Cell In[160], line 1
----> 1 pd.read_csv(StringIO(data))
File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
209 else:
210 kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
325 if len(args) > num_allow_args:
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
935 kwds_defaults = _refine_defaults_read(
936 dialect,
937 delimiter,
(...)
946 defaults={"delimiter": ","},
947 )
948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds)
608 return parser
610 with parser:
--> 611 return parser.read(nrows)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows)
1771 nrows = validate_integer("nrows", nrows)
1772 try:
1773 # error: "ParserBase" has no attribute "read"
1774 (
1775 index,
1776 columns,
1777 col_dict,
-> 1778 ) = self._engine.read( # type: ignore[attr-defined]
1779 nrows
1780 )
1781 except Exception:
1782 self.close()
File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
228 try:
229 if self.low_memory:
--> 230 chunks = self._reader.read_low_memory(nrows)
231 # destructive to chunks
232 data = _concatenate_chunks(chunks)
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
Or pass a callable function to handle the bad line if engine="python".
The bad line will be a list of strings that was split by the sep:
In [29]: external_list = []
In [30]: def bad_lines_func(line):
...: external_list.append(line)
...: return line[-3:]
In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
Out[31]:
a b c
0 1 2 3
1 5 6 7
2 8 9 10
In [32]: external_list
Out[32]: [4, 5, 6, 7]
New in version 1.4.0.
You can also use the usecols parameter to eliminate extraneous column
data that appear in some lines but not others:
In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[33]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
In case you want to keep all data including the lines with too many fields, you can
specify a sufficient number of names. This ensures that lines with not enough
fields are filled with NaN.
In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd'])
Out[34]:
a b c d
0 1 2 3 NaN
1 4 5 6 7
2 8 9 10 NaN
Dialect#
The dialect keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f"
In [162]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this using dialect:
In [163]: import csv
In [164]: dia = csv.excel()
In [165]: dia.quoting = csv.QUOTE_NONE
In [166]: pd.read_csv(StringIO(data), dialect=dia)
Out[166]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [167]: data = "a,b,c~1,2,3~4,5,6"
In [168]: pd.read_csv(StringIO(data), lineterminator="~")
Out[168]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace
after a delimiter:
In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6"
In [170]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [171]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[171]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to “do the right thing” and not be fragile. Type
inference is a pretty big deal. If a column can be coerced to integer dtype
without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.
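As a minimal sketch of this inference (the column names and values here are illustrative only):
from io import StringIO
import pandas as pd

data = "a,b,c\n1,2.5,x\n3,4.5,y"
df = pd.read_csv(StringIO(data))
# "a" is coerced to int64, "b" only to float64, and "c" stays as object
df.dtypes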
Quoting and Escape Characters#
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:
In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [173]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [174]: pd.read_csv(StringIO(data), escapechar="\\")
Out[174]:
a b
0 hello, "Bob", nice to see you 5
Files with fixed width columns#
While read_csv() reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters, and
a different usage of the delimiter parameter:
colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behavior, if not specified, is to infer.
widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.
delimiter: Characters to consider as filler characters in the fixed-width file.
Can be used to specify the filler character of the fields
if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [175]: data1 = (
.....: "id8141 360.242940 149.910199 11950.7\n"
.....: "id1594 444.953632 166.985655 11788.4\n"
.....: "id1849 364.136849 183.628767 11806.2\n"
.....: "id1230 413.836124 184.375703 11916.8\n"
.....: "id1948 502.953953 173.237159 12468.3"
.....: )
.....:
In [176]: with open("bar.csv", "w") as f:
.....: f.write(data1)
.....:
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-intervals
In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0)
In [179]: df
Out[179]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically assigns integer column names (0, 1, 2, …) when the
header=None argument is specified. Alternatively, you can supply just the
column widths for contiguous columns:
# Widths are a list of integers
In [180]: widths = [6, 14, 13, 10]
In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None)
In [182]: df
Out[182]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns
so it’s ok to have extra separation between the columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).
In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0)
In [184]: df
Out[184]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of
parsed columns to be different from the inferred type.
In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[185]:
1 float64
2 float64
3 float64
dtype: object
In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[186]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes#
Files with an “implicit” index column#
Consider a file with one less entry in the header than the number of data
column:
In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"
In [188]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In [189]: with open("foo.csv", "w") as f:
.....: f.write(data)
.....:
In this special case, read_csv assumes that the first column is to be used
as the index of the DataFrame:
In [190]: pd.read_csv("foo.csv")
Out[190]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need
to do as before:
In [191]: df = pd.read_csv("foo.csv", parse_dates=True)
In [192]: df.index
Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
Reading an index with a MultiIndex#
Suppose you have data indexed by two columns:
In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'
In [194]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
In [195]: with open("mindex_ex.csv", mode="w") as f:
.....: f.write(data)
.....:
The index_col argument to read_csv can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1])
In [197]: df
Out[197]:
zit xit
year indiv
1977 A 1.2 0.6
B 1.5 0.5
In [198]: df.loc[1977]
Out[198]:
zit xit
indiv
A 1.2 0.6
B 1.5 0.5
Reading columns with a MultiIndex#
By specifying list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows.
In [199]: from pandas._testing import makeCustomDataframe as mkdf
In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [201]: df.to_csv("mi.csv")
In [202]: print(open("mi.csv").read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])
Out[203]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
read_csv is also able to interpret a more common format
of multi-columns indices.
In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"
In [205]: print(data)
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [206]: with open("mi2.csv", "w") as fh:
.....: fh.write(data)
.....:
In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0)
Out[207]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note
If an index_col is not specified (e.g. you don’t have an index, or wrote it
with df.to_csv(..., index=False)), then any names on the columns index will
be lost.
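A small sketch of this behaviour (the frame and file name are illustrative):
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.columns.name = "cols"                   # a name on the columns index
df.to_csv("no_index.csv", index=False)
pd.read_csv("no_index.csv").columns.name   # None -- the name is not preserved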
Automatically “sniffing” the delimiter#
read_csv is capable of inferring delimited (not necessarily
comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [208]: df = pd.DataFrame(np.random.randn(10, 4))
In [209]: df.to_csv("tmp.csv", sep="|")
In [210]: df.to_csv("tmp2.csv", sep=":")
In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python")
Out[211]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
Reading multiple files to create a single DataFrame#
It’s best to use concat() to combine multiple files.
See the cookbook for an example.
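As a rough sketch, assuming a set of files matching a common pattern (the pattern below is illustrative):
import glob
import pandas as pd

files = sorted(glob.glob("data_part_*.csv"))  # hypothetical file names
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)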
Iterating through files chunk by chunk#
Suppose you wish to iterate through a (potentially very large) file lazily
rather than reading the entire file into memory, such as the following:
In [212]: df = pd.DataFrame(np.random.randn(10, 4))
In [213]: df.to_csv("tmp.csv", sep="|")
In [214]: table = pd.read_csv("tmp.csv", sep="|")
In [215]: table
Out[215]:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
By specifying a chunksize to read_csv, the return
value will be an iterable object of type TextFileReader:
In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
Unnamed: 0 0 1 2 3
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
Unnamed: 0 0 1 2 3
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
Specifying iterator=True will also return the TextFileReader object:
In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
.....: reader.get_chunk(5)
.....:
Specifying the parser engine#
pandas currently supports three engines: the C engine, the Python engine, and an experimental
pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest
on larger workloads and is equivalent in speed to the C engine on most other workloads.
The Python engine tends to be slower than the pyarrow and C engines on most workloads. However,
the pyarrow engine is much less robust than the C engine, and the C engine in turn lacks a few
features compared to the Python engine.
Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if C-unsupported options are specified.
Currently, options unsupported by the C and pyarrow engines include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.
Options that are unsupported by the pyarrow engine which are not covered by the list above include:
float_precision
chunksize
comment
nrows
thousands
memory_map
dialect
warn_bad_lines
error_bad_lines
on_bad_lines
delim_whitespace
quoting
lineterminator
converters
decimal
iterator
dayfirst
infer_datetime_format
verbose
skipinitialspace
low_memory
Specifying these options with engine='pyarrow' will raise a ValueError.
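A short sketch of selecting an engine explicitly (the data and file name are illustrative; the last call assumes the pyarrow package is installed):
import pandas as pd

with open("engines.csv", "w") as f:
    f.write("a;b;c\n1;2;3\n4;5;6")

pd.read_csv("engines.csv", sep=";")                    # C engine by default
pd.read_csv("engines.csv", sep=";", engine="python")   # force the Python engine
pd.read_csv("engines.csv", sep=";", engine="pyarrow")  # experimental pyarrow engine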
Reading/writing remote files#
You can pass in a URL to read or write remote files to many of pandas’ IO
functions - the following example shows reading a CSV file:
df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
New in version 1.3.0.
A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the storage_options keyword argument as shown below:
headers = {"User-Agent": "pandas"}
df = pd.read_csv(
"https://download.bls.gov/pub/time.series/cu/cu.item",
sep="\t",
storage_options=headers
)
All URLs which are not local files or HTTP(s) are handled by
fsspec, if installed, and its various filesystem implementations
(including Amazon S3, Google Cloud, SSH, FTP, webHDFS…).
Some of these implementations will require additional packages to be
installed, for example
S3 URLs require the s3fs library:
df = pd.read_json("s3://pandas-test/adatafile.json")
When dealing with remote storage systems, you might need
extra configuration with environment variables or config files in
special locations. For example, to access data in your S3 bucket,
you will need to define credentials in one of the several ways listed in
the S3Fs documentation. The same is true
for several of the storage backends, and you should follow the links
at fsimpl1 for implementations built into fsspec and fsimpl2
for those not included in the main fsspec
distribution.
You can also pass parameters directly to the backend driver. For example,
if you do not have S3 credentials, you can still access public data by
specifying an anonymous connection, such as
New in version 1.2.0.
pd.read_csv(
"s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
"-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"anon": True},
)
fsspec also allows complex URLs, for accessing data in compressed
archives, local caching of files, and more. To locally cache the above
example, you would modify the call to
pd.read_csv(
"simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
"SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"s3": {"anon": True}},
)
where we specify that the “anon” parameter is meant for the “s3” part of
the implementation, not to the caching implementation. Note that this caches to a temporary
directory for the duration of the session only, but you can also specify
a permanent store.
Writing out data#
Writing to CSV format#
The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments. Only the first is required.
path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''
sep : Field delimiter for the output file (default “,”)
na_rep: A string representation of a missing value (default ‘’)
float_format: Format string for floating point numbers
columns: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default ‘w’
encoding: a string representing the encoding to use if the contents are
non-ASCII
lineterminator: Character sequence denoting line end (default os.linesep)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
quotechar: Character used to quote fields (default ‘”’)
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when
appropriate (default None)
chunksize: Number of rows to write at a time
date_format: Format string for datetime objects
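A brief sketch combining a few of these arguments (the frame, file name and formats are illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.5, np.nan, 3.25], "b": ["x", "y", "z"]})
df.to_csv(
    "out.csv",
    sep=";",              # field delimiter
    na_rep="NA",          # representation of missing values
    float_format="%.2f",  # two decimal places for floats
    index=False,          # do not write the row index
)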
Writing a formatted string#
The DataFrame object has an instance method to_string which allows control
over the string representation of the object. All arguments are optional:
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
float_format default None, a function which takes a single (float)
argument and returns a formatted string; to be applied to floats in the
DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical
index to print every MultiIndex key at each row.
index_names default True, will print the names of the indices
index default True, will print the index (ie, row labels)
header default True, will print the column labels
justify default left, will print column headers left- or
right-justified
The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.
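For instance, a small sketch (the frame and formatting choices are illustrative):
import pandas as pd

df = pd.DataFrame({"a": [1.23456, 2.34567], "b": ["x", "y"]})
print(df.to_string(float_format="{:.2f}".format, index=False))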
JSON#
Read and write JSON format files and strings.
Writing JSON#
A Series or DataFrame can be converted to a valid JSON string. Use to_json
with optional parameters:
path_or_buf : the pathname or buffer to write the output
This can be None in which case a JSON string is returned
orient :
Series:
default is index
allowed values are {split, records, index}
DataFrame:
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If records orient, then will write each record per line as json.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [219]: json = dfj.to_json()
In [220]: json
Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'
Orient options#
There are a number of different options for the format of the resulting JSON
file / string. Consider the following DataFrame and Series:
In [221]: dfjo = pd.DataFrame(
.....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list("ABC"),
.....: index=list("xyz"),
.....: )
.....:
In [222]: dfjo
Out[222]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D")
In [224]: sjo
Out[224]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as
nested JSON objects with column labels acting as the primary index:
In [225]: dfjo.to_json(orient="columns")
Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
# Not available for Series
Index oriented (the default for Series) similar to column oriented
but the index labels are now primary:
In [226]: dfjo.to_json(orient="index")
Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [227]: sjo.to_json(orient="index")
Out[227]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
In [228]: dfjo.to_json(orient="records")
Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [229]: sjo.to_json(orient="records")
Out[229]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of
values only, column and index labels are not included:
In [230]: dfjo.to_json(orient="values")
Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]'
# Not available for Series
Split oriented serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for Series:
In [231]: dfjo.to_json(orient="split")
Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}'
In [232]: sjo.to_json(orient="split")
Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the
preservation of metadata including but not limited to dtypes and index names.
Note
Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.
Date handling#
Writing in ISO date format:
In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [234]: dfd["date"] = pd.Timestamp("20130101")
In [235]: dfd = dfd.sort_index(axis=1, ascending=False)
In [236]: json = dfd.to_json(date_format="iso")
In [237]: json
Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing in ISO date format, with microseconds:
In [238]: json = dfd.to_json(date_format="iso", date_unit="us")
In [239]: json
Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Epoch timestamps, in seconds:
In [240]: json = dfd.to_json(date_format="epoch", date_unit="s")
In [241]: json
Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing to a file, with a date index and a date column:
In [242]: dfj2 = dfj.copy()
In [243]: dfj2["date"] = pd.Timestamp("20130101")
In [244]: dfj2["ints"] = list(range(5))
In [245]: dfj2["bools"] = True
In [246]: dfj2.index = pd.date_range("20130101", periods=5)
In [247]: dfj2.to_json("test.json")
In [248]: with open("test.json") as fh:
.....: print(fh.read())
.....:
{"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior#
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called
for each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it.
A toDict method should return a dict which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail
with an OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler.
For example:
>>> pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json()  # raises
RuntimeError: Unhandled numpy dtype 15
can be dealt with by specifying a simple default_handler:
In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
Reading JSON#
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ="series".
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file ://localhost/path/to/table.json
typ : type of object to recover (series or frame), default ‘frame’
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data.
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
numpy : direct decoding to NumPy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see Orient Options for an
overview.
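As a minimal sketch of forcing Series parsing (the data is illustrative):
import pandas as pd

json_str = pd.Series([1, 2, 3], index=["a", "b", "c"]).to_json()  # default index orient
s = pd.read_json(json_str, typ="series")
Because the string was written with the default index orient for Series, no orient argument is needed when reading it back.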
Data conversion#
The default of convert_axes=True, dtype=True, and convert_dates=True
will try to parse the axes, and all of the data into appropriate types,
including dates. If you need to override specific dtypes, pass a dict to
dtype. convert_axes should only be set to False if you need to
preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.
Note
Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
Warning
When reading JSON data, automatic coercing into dtypes has some quirks:
an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
Reading from a JSON string:
In [250]: pd.read_json(json)
Out[250]:
date B A
0 2013-01-01 0.403310 0.176444
1 2013-01-01 0.301624 -0.154951
2 2013-01-01 -1.369849 -2.179861
3 2013-01-01 1.462696 -0.954208
4 2013-01-01 -0.826591 -1.743161
Reading from a file:
In [251]: pd.read_json("test.json")
Out[251]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [252]: pd.read_json("test.json", dtype=object).dtypes
Out[252]:
A object
B object
date object
ints object
bools object
dtype: object
Specify dtypes for conversion:
In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes
Out[253]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object
Preserve string indices:
In [254]: si = pd.DataFrame(
.....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)]
.....: )
.....:
In [255]: si
Out[255]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [256]: si.index
Out[256]: Index(['0', '1', '2', '3'], dtype='object')
In [257]: si.columns
Out[257]: Int64Index([0, 1, 2, 3], dtype='int64')
In [258]: json = si.to_json()
In [259]: sij = pd.read_json(json, convert_axes=False)
In [260]: sij
Out[260]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [261]: sij.index
Out[261]: Index(['0', '1', '2', '3'], dtype='object')
In [262]: sij.columns
Out[262]: Index(['0', '1', '2', '3'], dtype='object')
Dates written in nanoseconds need to be read back in nanoseconds:
In [263]: json = dfj2.to_json(date_unit="ns")
# Try to parse timestamps as milliseconds -> Won't Work
In [264]: dfju = pd.read_json(json, date_unit="ms")
In [265]: dfju
Out[265]:
A B date ints bools
1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True
1357084800000000000 0.695775 0.341734 1356998400000000000 1 True
1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True
1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True
1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True
# Let pandas detect the correct precision
In [266]: dfju = pd.read_json(json)
In [267]: dfju
Out[267]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [268]: dfju = pd.read_json(json, date_unit="ns")
In [269]: dfju
Out[269]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
The Numpy parameter#
Note
This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates, etc.
If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
data:
In [270]: randfloats = np.random.uniform(-100, 1000, 10000)
In [271]: randfloats.shape = (1000, 10)
In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ"))
In [273]: jsonfloats = dffloats.to_json()
In [274]: %timeit pd.read_json(jsonfloats)
7.91 ms +- 77.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [275]: %timeit pd.read_json(jsonfloats, numpy=True)
5.71 ms +- 333 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
The speedup is less noticeable for smaller datasets:
In [276]: jsonfloats = dffloats.head(100).to_json()
In [277]: %timeit pd.read_json(jsonfloats)
4.46 ms +- 25.9 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [278]: %timeit pd.read_json(jsonfloats, numpy=True)
4.09 ms +- 32.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning
Direct NumPy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:
data is numeric.
data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.
labels are ordered. Labels are only read from the first container, it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.
Normalization#
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data
into a flat table.
In [279]: data = [
.....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
.....: {"name": {"given": "Mark", "family": "Regner"}},
.....: {"id": 2, "name": "Faye Raker"},
.....: ]
.....:
In [280]: pd.json_normalize(data)
Out[280]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
In [281]: data = [
.....: {
.....: "state": "Florida",
.....: "shortname": "FL",
.....: "info": {"governor": "Rick Scott"},
.....: "county": [
.....: {"name": "Dade", "population": 12345},
.....: {"name": "Broward", "population": 40000},
.....: {"name": "Palm Beach", "population": 60000},
.....: ],
.....: },
.....: {
.....: "state": "Ohio",
.....: "shortname": "OH",
.....: "info": {"governor": "John Kasich"},
.....: "county": [
.....: {"name": "Summit", "population": 1234},
.....: {"name": "Cuyahoga", "population": 1337},
.....: ],
.....: },
.....: ]
.....:
In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]])
Out[282]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization.
With max_level=1 the following snippet normalizes until 1st nesting level of the provided dict.
In [283]: data = [
.....: {
.....: "CreatedBy": {"Name": "User001"},
.....: "Lookup": {
.....: "TextField": "Some text",
.....: "UserField": {"Id": "ID001", "Name": "Name001"},
.....: },
.....: "Image": {"a": "b"},
.....: }
.....: ]
.....:
In [284]: pd.json_normalize(data, max_level=1)
Out[284]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b
Line delimited json#
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.
In [285]: jsonl = """
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: """
.....:
In [286]: df = pd.read_json(jsonl, lines=True)
In [287]: df
Out[287]:
a b
0 1 2
1 3 4
In [288]: df.to_json(orient="records", lines=True)
Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n'
# reader is an iterator that returns ``chunksize`` lines each iteration
In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4
Table schema#
Table Schema is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient table to build
a JSON string with two fields, schema and data.
In [290]: df = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": ["a", "b", "c"],
.....: "C": pd.date_range("2016-01-01", freq="d", periods=3),
.....: },
.....: index=pd.Index(range(3), name="idx"),
.....: )
.....:
In [291]: df
Out[291]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [292]: df.to_json(orient="table", date_format="iso")
Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
The schema field contains the fields key, which itself contains
a list of column name to type pairs, including the Index or MultiIndex
(see below for a list of types).
The schema field also contains a primaryKey field if the (Multi)index
is unique.
The second field, data, contains the serialized data with the records
orient.
The index is included, and any datetimes are ISO 8601 formatted, as required
by the Table Schema spec.
The full list of supported types is described in the Table Schema
spec. This table shows the mapping from pandas types:
pandas type        Table Schema type
int64              integer
float64            number
bool               boolean
datetime64[ns]     datetime
timedelta64[ns]    duration
categorical        any
object             str
A few notes on the generated table schema:
The schema object contains a pandas_version field. This contains
the version of pandas’ dialect of the schema, and will be incremented
with each revision.
All dates are converted to UTC when serializing. Even timezone naive values,
which are treated as UTC with an offset of 0.
In [293]: from pandas.io.json import build_table_schema
In [294]: s = pd.Series(pd.date_range("2016", periods=4))
In [295]: build_table_schema(s)
Out[295]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
datetimes with a timezone (before serializing), include an additional field
tz with the time zone name (e.g. 'US/Central').
In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central"))
In [297]: build_table_schema(s_tz)
Out[297]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Periods are converted to timestamps before serialization, and so have the
same behavior of being converted to UTC. In addition, periods will contain
an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4))
In [299]: build_table_schema(s_per)
Out[299]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Categoricals use the any type and an enum constraint listing
the set of possible values. Additionally, an ordered field is included:
In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"]))
In [301]: build_table_schema(s_cat)
Out[301]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
A primaryKey field, containing an array of labels, is included
if the index is unique:
In [302]: s_dupe = pd.Series([1, 2], index=[1, 1])
In [303]: build_table_schema(s_dupe)
Out[303]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '1.4.0'}
The primaryKey behavior is the same with MultiIndexes, but in this
case the primaryKey is an array:
In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)]))
In [305]: build_table_schema(s_multi)
Out[305]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '1.4.0'}
The default naming roughly follows these rules:
For series, the object.name is used. If that’s none, then the
name is values
For DataFrames, the stringified version of the column name is used
For Index (not MultiIndex), index.name is used, with a
fallback to index if that is None.
For MultiIndex, mi.names is used. If any level has no name,
then level_<i> is used.
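As a small illustration of these rules (the series name is arbitrary):
import pandas as pd
from pandas.io.json import build_table_schema

s = pd.Series([1, 2, 3], name="my_values")
build_table_schema(s)  # the values field is named "my_values" rather than "values"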
read_json also accepts orient='table' as an argument. This allows for
the preservation of metadata such as dtypes and index names in a
round-trippable manner.
In [306]: df = pd.DataFrame(
.....: {
.....: "foo": [1, 2, 3, 4],
.....: "bar": ["a", "b", "c", "d"],
.....: "baz": pd.date_range("2018-01-01", freq="d", periods=4),
.....: "qux": pd.Categorical(["a", "b", "c", "c"]),
.....: },
.....: index=pd.Index(range(4), name="idx"),
.....: )
.....:
In [307]: df
Out[307]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [308]: df.dtypes
Out[308]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [309]: df.to_json("test.json", orient="table")
In [310]: new_df = pd.read_json("test.json", orient="table")
In [311]: new_df
Out[311]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [312]: new_df.dtypes
Out[312]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index
is not round-trippable, nor are any names beginning with 'level_' within a
MultiIndex. These are used by default in DataFrame.to_json() to
indicate missing values and the subsequent read cannot distinguish the intent.
In [313]: df.index.name = "index"
In [314]: df.to_json("test.json", orient="table")
In [315]: new_df = pd.read_json("test.json", orient="table")
In [316]: print(new_df.index.name)
None
When using orient='table' along with user-defined ExtensionArray,
the generated schema will contain an additional extDtype key in the respective
fields element. This extra key is not standard but does enable JSON roundtrips
for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).
The extDtype key carries the name of the extension, if you have properly registered
the ExtensionDtype, pandas will use said name to perform a lookup into the registry
and re-convert the serialized data into your custom dtype.
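As a rough sketch using pandas’ own nullable integer type, which is registered under the name "Int64" (assuming a reasonably recent pandas version):
import pandas as pd

ext = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
json_str = ext.to_json(orient="table")         # the schema field for "a" carries extDtype "Int64"
pd.read_json(json_str, orient="table").dtypes  # "a" is reconstructed as Int64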
HTML#
Reading HTML content#
Warning
We highly encourage you to read the HTML Table Parsing gotchas
below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML
string/file/URL and will parse HTML tables into list of pandas DataFrames.
Let’s look at a few examples.
Note
read_html returns a list of DataFrame objects, even if there is
only a single table contained in the HTML content.
Read a URL with no options:
In [320]: url = "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538
1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537
2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536
3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535
4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534
.. ... ... ... ... ... ... ...
558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004
559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648
560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647
561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646
562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645
[563 rows x 7 columns]]
Note
The data from the above URL changes every Monday so the resulting data above may be slightly different.
You can also write HTML content to a local file and pass the file path to
read_html:
In [317]: html_str = """
.....: <table>
.....: <tr>
.....: <th>A</th>
.....: <th colspan="1">B</th>
.....: <th rowspan="1">C</th>
.....: </tr>
.....: <tr>
.....: <td>a</td>
.....: <td>b</td>
.....: <td>c</td>
.....: </tr>
.....: </table>
.....: """
.....:
In [318]: with open("tmp.html", "w") as f:
.....: f.write(html_str)
.....:
In [319]: df = pd.read_html("tmp.html")
In [320]: df[0]
Out[320]:
A B C
0 a b c
You can even pass in an instance of StringIO if you so desire:
In [321]: dfs = pd.read_html(StringIO(html_str))
In [322]: dfs[0]
Out[322]:
A B C
0 a b c
Note
The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.
Read a URL and match a table that contains specific text:
match = "Metcalf Bank"
df_list = pd.read_html(url, match=match)
Specify a header row (by default <th> or <td> elements located within a
<thead> are used to form the column index, if multiple rows are contained within
<thead> then a MultiIndex is created); if specified, the header row is taken
from the data minus the parsed header elements (<th> elements).
dfs = pd.read_html(url, header=0)
Specify an index column:
dfs = pd.read_html(url, index_col=0)
Specify a number of rows to skip:
dfs = pd.read_html(url, skiprows=0)
Specify a number of rows to skip using a list (range works
as well):
dfs = pd.read_html(url, skiprows=range(2))
Specify an HTML attribute:
dfs1 = pd.read_html(url, attrs={"id": "table"})
dfs2 = pd.read_html(url, attrs={"class": "sortable"})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=["No Acquirer"])
Specify whether to keep the default set of NaN values:
dfs = pd.read_html(url, keep_default_na=False)
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
columns to strings.
url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code"
dfs = pd.read_html(
url_mcc,
match="Telekom Albania",
header=0,
converters={"MNC": str},
)
Use some combination of the above:
dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format="{0:.40g}".format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only
parser you provide. If you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings. You may use:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])
Or you could pass flavor='lxml' without a list:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")
However, if you have bs4 and html5lib installed and pass None or ['lxml',
'bs4'] then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])
Links can be extracted from cells along with the text using extract_links="all".
In [323]: html_table = """
.....: <table>
.....: <tr>
.....: <th>GitHub</th>
.....: </tr>
.....: <tr>
.....: <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
.....: </tr>
.....: </table>
.....: """
.....:
In [324]: df = pd.read_html(
.....: html_table,
.....: extract_links="all"
.....: )[0]
.....:
In [325]: df
Out[325]:
(GitHub, None)
0 (pandas, https://github.com/pandas-dev/pandas)
In [326]: df[("GitHub", None)]
Out[326]:
0 (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object
In [327]: df[("GitHub", None)].str[1]
Out[327]:
0 https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object
New in version 1.5.0.
Writing to HTML files#
DataFrame objects have an instance method to_html which renders the
contents of the DataFrame as an HTML table. The function arguments are as
in the method to_string described above.
Note
Not all of the possible options for DataFrame.to_html are shown here for
brevity’s sake. See to_html() for the
full set of options.
Note
In an environment that supports HTML rendering, such as a Jupyter Notebook, display(HTML(...))
will render the raw HTML into the environment.
In [328]: from IPython.display import display, HTML
In [329]: df = pd.DataFrame(np.random.randn(2, 2))
In [330]: df
Out[330]:
0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [331]: html = df.to_html()
In [332]: print(html) # raw html
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [333]: display(HTML(html))
<IPython.core.display.HTML object>
The columns argument will limit the columns shown:
In [334]: html = df.to_html(columns=[0])
In [335]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
</tr>
</tbody>
</table>
In [336]: display(HTML(html))
<IPython.core.display.HTML object>
float_format takes a Python callable to control the precision of floating
point values:
In [337]: html = df.to_html(float_format="{0:.10f}".format)
In [338]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0703192665</td>
<td>1.7739074228</td>
</tr>
<tr>
<th>1</th>
<td>0.2539083433</td>
<td>0.4145805920</td>
</tr>
</tbody>
</table>
In [339]: display(HTML(html))
<IPython.core.display.HTML object>
bold_rows will make the row labels bold by default, but you can turn that
off:
In [340]: html = df.to_html(bold_rows=False)
In [341]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<td>1</td>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [342]: display(HTML(html))
<IPython.core.display.HTML object>
The classes argument provides the ability to give the resulting HTML
table CSS classes. Note that these classes are appended to the existing
'dataframe' class.
In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))
<table border="1" class="dataframe awesome_table_class even_more_awesome_class">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
The render_links argument provides the ability to add hyperlinks to cells
that contain URLs.
In [344]: url_df = pd.DataFrame(
.....: {
.....: "name": ["Python", "pandas"],
.....: "url": ["https://www.python.org/", "https://pandas.pydata.org"],
.....: }
.....: )
.....:
In [345]: html = url_df.to_html(render_links=True)
In [346]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</a></td>
</tr>
<tr>
<th>1</th>
<td>pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
In [347]: display(HTML(html))
<IPython.core.display.HTML object>
Finally, the escape argument allows you to control whether the
“<”, “>” and “&” characters are escaped in the resulting HTML (by default it is
True). To get the HTML without escaped characters, pass escape=False:
In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})
Escaped:
In [349]: html = df.to_html()
In [350]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [351]: display(HTML(html))
<IPython.core.display.HTML object>
Not escaped:
In [352]: html = df.to_html(escape=False)
In [353]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [354]: display(HTML(html))
<IPython.core.display.HTML object>
Note
Some browsers may not show a difference in the rendering of the previous two
HTML tables.
HTML Table Parsing Gotchas#
There are some versioning issues surrounding the libraries that are used to
parse HTML tables in the top-level pandas io function read_html.
Issues with lxml
Benefits
lxml is very fast.
lxml requires Cython to install correctly.
Drawbacks
lxml does not make any guarantees about the results of its parse
unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the
lxml backend, but this backend will use html5lib if lxml
fails to parse.
It is therefore highly recommended that you install both
BeautifulSoup4 and html5lib, so that you will still get a valid
result (provided everything else is valid) even if lxml fails.
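If you want to control the parser choice explicitly, read_html accepts a flavor
argument. A minimal sketch, assuming lxml, BeautifulSoup4 and html5lib are all
installed (the URL is a placeholder): lxml is tried first, with a fall back to the
BeautifulSoup4-based parser if it cannot handle the markup.
dfs = pd.read_html("https://example.com/tables.html", flavor=["lxml", "bs4"])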
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially
just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
html5lib is far more lenient than lxml and consequently deals
with real-life markup in a much saner way rather than just, e.g.,
dropping an element without notifying you.
html5lib generates valid HTML5 markup from invalid markup
automatically. This is extremely important for parsing HTML tables,
since it guarantees a valid document. However, that does NOT mean that
it is “correct”, since the process of fixing markup does not have a
single definition.
html5lib is pure Python and requires no additional build steps beyond
its own installation.
Drawbacks
The biggest drawback to using html5lib is that it is slow as
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
LaTeX#
New in version 1.3.0.
Currently there are no methods to read from LaTeX, only output methods.
Writing to LaTeX files#
Note
DataFrame and Styler objects currently have a to_latex method. We recommend
using the Styler.to_latex() method
over DataFrame.to_latex() due to the former’s greater flexibility with
conditional styling, and the latter’s possible future deprecation.
Review the documentation for Styler.to_latex,
which gives examples of conditional styling and explains the operation of its keyword
arguments.
For simple application the following pattern is sufficient.
In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"])
In [356]: print(df.style.to_latex())
\begin{tabular}{lrr}
& c & d \\
a & 1 & 2 \\
b & 3 & 4 \\
\end{tabular}
To format values before output, chain the Styler.format
method.
In [357]: print(df.style.format("€ {}").to_latex())
\begin{tabular}{lrr}
& c & d \\
a & € 1 & € 2 \\
b & € 3 & € 4 \\
\end{tabular}
XML#
Reading XML#
New in version 1.3.0.
The top-level read_xml() function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas DataFrame.
Note
Since there is no standard XML structure where design types can vary in
many ways, read_xml works best with flatter, shallow versions. If
an XML document is deeply nested, use the stylesheet feature to
transform XML into a flatter version.
Let’s look at a few examples.
Read an XML string:
In [358]: xml = """<?xml version="1.0" encoding="UTF-8"?>
.....: <bookstore>
.....: <book category="cooking">
.....: <title lang="en">Everyday Italian</title>
.....: <author>Giada De Laurentiis</author>
.....: <year>2005</year>
.....: <price>30.00</price>
.....: </book>
.....: <book category="children">
.....: <title lang="en">Harry Potter</title>
.....: <author>J K. Rowling</author>
.....: <year>2005</year>
.....: <price>29.99</price>
.....: </book>
.....: <book category="web">
.....: <title lang="en">Learning XML</title>
.....: <author>Erik T. Ray</author>
.....: <year>2003</year>
.....: <price>39.95</price>
.....: </book>
.....: </bookstore>"""
.....:
In [359]: df = pd.read_xml(xml)
In [360]: df
Out[360]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read a URL with no options:
In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml")
In [362]: df
Out[362]:
category title author year price cover
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None
1 children Harry Potter J K. Rowling 2005 29.99 None
2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None
3 web Learning XML Erik T. Ray 2003 39.95 paperback
Read in the content of the “books.xml” file and pass it to read_xml
as a string:
In [363]: file_path = "books.xml"
In [364]: with open(file_path, "w") as f:
.....: f.write(xml)
.....:
In [365]: with open(file_path, "r") as f:
.....: df = pd.read_xml(f.read())
.....:
In [366]: df
Out[366]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read in the content of the “books.xml” as instance of StringIO or
BytesIO and pass it to read_xml:
In [367]: with open(file_path, "r") as f:
.....: sio = StringIO(f.read())
.....:
In [368]: df = pd.read_xml(sio)
In [369]: df
Out[369]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
In [370]: with open(file_path, "rb") as f:
.....: bio = BytesIO(f.read())
.....:
In [371]: df = pd.read_xml(bio)
In [372]: df
Out[372]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Even read XML from AWS S3 buckets such as NIH NCBI PMC Article Datasets providing
Biomedical and Life Science Journals:
In [373]: df = pd.read_xml(
.....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml",
.....: xpath=".//journal-meta",
.....: )
.....:
In [374]: df
Out[374]:
journal-id journal-title issn publisher
0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN
With lxml as default parser, you access the full-featured XML library
that extends Python’s ElementTree API. One powerful tool is the ability to query
nodes selectively or conditionally with more expressive XPath:
In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]")
In [376]: df
Out[376]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
Specify only elements or only attributes to parse:
In [377]: df = pd.read_xml(file_path, elems_only=True)
In [378]: df
Out[378]:
title author year price
0 Everyday Italian Giada De Laurentiis 2005 30.00
1 Harry Potter J K. Rowling 2005 29.99
2 Learning XML Erik T. Ray 2003 39.95
In [379]: df = pd.read_xml(file_path, attrs_only=True)
In [380]: df
Out[380]:
category
0 cooking
1 children
2 web
XML documents can have namespaces with prefixes and default namespaces without
prefixes, both of which are denoted with a special attribute xmlns. In order
to parse by node under a namespace context, xpath must reference a prefix.
For example, below XML contains a namespace with prefix, doc, and URI at
https://example.com. In order to parse doc:row nodes,
namespaces must be used.
In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <doc:data xmlns:doc="https://example.com">
.....: <doc:row>
.....: <doc:shape>square</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides>4.0</doc:sides>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>circle</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides/>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>triangle</doc:shape>
.....: <doc:degrees>180</doc:degrees>
.....: <doc:sides>3.0</doc:sides>
.....: </doc:row>
.....: </doc:data>"""
.....:
In [382]: df = pd.read_xml(xml,
.....: xpath="//doc:row",
.....: namespaces={"doc": "https://example.com"})
.....:
In [383]: df
Out[383]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
Similarly, an XML document can have a default namespace without prefix. Failing
to assign a temporary prefix will return no nodes and raise a ValueError.
But assigning any temporary name to the correct URI allows parsing by nodes.
In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <data xmlns="https://example.com">
.....: <row>
.....: <shape>square</shape>
.....: <degrees>360</degrees>
.....: <sides>4.0</sides>
.....: </row>
.....: <row>
.....: <shape>circle</shape>
.....: <degrees>360</degrees>
.....: <sides/>
.....: </row>
.....: <row>
.....: <shape>triangle</shape>
.....: <degrees>180</degrees>
.....: <sides>3.0</sides>
.....: </row>
.....: </data>"""
.....:
In [385]: df = pd.read_xml(xml,
.....: xpath="//pandas:row",
.....: namespaces={"pandas": "https://example.com"})
.....:
In [386]: df
Out[386]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
However, if XPath does not reference node names such as default, /*, then
namespaces is not required.
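As a hedged sketch using the default-namespace document above: element wildcards
match nodes in any namespace, so no prefix mapping is needed (the exact rows
returned depend on which nodes the expression matches).
df = pd.read_xml(xml, xpath="./*")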
With lxml as parser, you can flatten nested XML documents with an XSLT
script which also can be string/file/URL types. As background, XSLT is
a special-purpose language written in a special XML file that can transform
original XML documents into other XML, HTML, even text (CSV, JSON, etc.)
using an XSLT processor.
For example, consider this somewhat nested structure of Chicago “L” Rides
where station and rides elements encapsulate data in their own sections.
With the XSLT below, lxml can transform the original nested document into a flatter
output (as shown below for demonstration) for easier parsing into a DataFrame:
In [387]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station id="40850" name="Library"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="41700" name="Washington/Wabash"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="40380" name="Clark/Lake"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: </response>"""
.....:
In [388]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/response">
.....: <xsl:copy>
.....: <xsl:apply-templates select="row"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <xsl:copy>
.....: <station_id><xsl:value-of select="station/@id"/></station_id>
.....: <station_name><xsl:value-of select="station/@name"/></station_name>
.....: <xsl:copy-of select="month|rides/*"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [389]: output = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station_id>40850</station_id>
.....: <station_name>Library</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>41700</station_id>
.....: <station_name>Washington/Wabash</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>40380</station_id>
.....: <station_name>Clark/Lake</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </row>
.....: </response>"""
.....:
In [390]: df = pd.read_xml(xml, stylesheet=xsl)
In [391]: df
Out[391]:
station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides
0 40850 Library ... 534.0 417.2
1 41700 Washington/Wabash ... 1909.8 1438.6
2 40380 Clark/Lake ... 1657.0 1453.8
[3 rows x 6 columns]
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse,
which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes
without holding the entire tree in memory.
New in version 1.5.0.
To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument.
Files should not be compressed or point to online sources but stored on local disk. Also, iterparse should be
a dictionary where the key is the repeating node in the document (whose occurrences become the rows) and the value is a list of
any element or attribute that is a descendant (i.e., child, grandchild) of the repeating node. Since XPath is not
used in this method, descendants do not need to share the same relationship with one another. Below is an example
of reading in Wikipedia’s very large (12 GB+) latest article data dump.
In [1]: df = pd.read_xml(
... "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
... iterparse = {"page": ["title", "ns", "id"]}
... )
... df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Writing XML#
New in version 1.3.0.
DataFrame objects have an instance method to_xml which renders the
contents of the DataFrame as an XML document.
Note
This method does not support special properties of XML including DTD,
CData, XSD schemas, processing instructions, comments, and others.
Only namespaces at the root level are supported. However, stylesheet
allows design changes after initial output.
Let’s look at a few examples.
Write an XML without options:
In [392]: geom_df = pd.DataFrame(
.....: {
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [393]: print(geom_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with new root and row name:
In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects"))
<?xml version='1.0' encoding='utf-8'?>
<geometry>
<objects>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</objects>
<objects>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</objects>
<objects>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</objects>
</geometry>
Write an attribute-centric XML:
In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))
<?xml version='1.0' encoding='utf-8'?>
<data>
<row index="0" shape="square" degrees="360" sides="4.0"/>
<row index="1" shape="circle" degrees="360"/>
<row index="2" shape="triangle" degrees="180" sides="3.0"/>
</data>
Write a mix of elements and attributes:
In [396]: print(
.....: geom_df.to_xml(
.....: index=False,
.....: attr_cols=['shape'],
.....: elem_cols=['degrees', 'sides'])
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row shape="square">
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row shape="circle">
<degrees>360</degrees>
<sides/>
</row>
<row shape="triangle">
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Any DataFrames with hierarchical columns will be flattened for XML element names
with levels delimited by underscores:
In [397]: ext_geom_df = pd.DataFrame(
.....: {
.....: "type": ["polygon", "other", "polygon"],
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [398]: pvt_df = ext_geom_df.pivot_table(index='shape',
.....: columns='type',
.....: values=['degrees', 'sides'],
.....: aggfunc='sum')
.....:
In [399]: pvt_df
Out[399]:
degrees sides
type other polygon other polygon
shape
circle 360.0 NaN 0.0 NaN
square NaN 360.0 NaN 4.0
triangle NaN 180.0 NaN 3.0
In [400]: print(pvt_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<shape>circle</shape>
<degrees_other>360.0</degrees_other>
<degrees_polygon/>
<sides_other>0.0</sides_other>
<sides_polygon/>
</row>
<row>
<shape>square</shape>
<degrees_other/>
<degrees_polygon>360.0</degrees_polygon>
<sides_other/>
<sides_polygon>4.0</sides_polygon>
</row>
<row>
<shape>triangle</shape>
<degrees_other/>
<degrees_polygon>180.0</degrees_polygon>
<sides_other/>
<sides_polygon>3.0</sides_polygon>
</row>
</data>
Write an XML with default namespace:
In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"}))
<?xml version='1.0' encoding='utf-8'?>
<data xmlns="https://example.com">
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with namespace prefix:
In [402]: print(
.....: geom_df.to_xml(namespaces={"doc": "https://example.com"},
.....: prefix="doc")
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
<doc:row>
<doc:index>0</doc:index>
<doc:shape>square</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides>4.0</doc:sides>
</doc:row>
<doc:row>
<doc:index>1</doc:index>
<doc:shape>circle</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides/>
</doc:row>
<doc:row>
<doc:index>2</doc:index>
<doc:shape>triangle</doc:shape>
<doc:degrees>180</doc:degrees>
<doc:sides>3.0</doc:sides>
</doc:row>
</doc:data>
Write an XML without declaration or pretty print:
In [403]: print(
.....: geom_df.to_xml(xml_declaration=False,
.....: pretty_print=False)
.....: )
.....:
<data><row><index>0</index><shape>square</shape><degrees>360</degrees><sides>4.0</sides></row><row><index>1</index><shape>circle</shape><degrees>360</degrees><sides/></row><row><index>2</index><shape>triangle</shape><degrees>180</degrees><sides>3.0</sides></row></data>
Write an XML and transform with stylesheet:
In [404]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/data">
.....: <geometry>
.....: <xsl:apply-templates select="row"/>
.....: </geometry>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <object index="{index}">
.....: <xsl:if test="shape!='circle'">
.....: <xsl:attribute name="type">polygon</xsl:attribute>
.....: </xsl:if>
.....: <xsl:copy-of select="shape"/>
.....: <property>
.....: <xsl:copy-of select="degrees|sides"/>
.....: </property>
.....: </object>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [405]: print(geom_df.to_xml(stylesheet=xsl))
<?xml version="1.0"?>
<geometry>
<object index="0" type="polygon">
<shape>square</shape>
<property>
<degrees>360</degrees>
<sides>4.0</sides>
</property>
</object>
<object index="1">
<shape>circle</shape>
<property>
<degrees>360</degrees>
<sides/>
</property>
</object>
<object index="2" type="polygon">
<shape>triangle</shape>
<property>
<degrees>180</degrees>
<sides>3.0</sides>
</property>
</object>
</geometry>
XML Final Notes#
All XML documents adhere to W3C specifications. Both etree and lxml
parsers will fail to parse any markup document that is not well-formed or
does not follow XML syntax rules. Do be aware that HTML is not an XML document unless it
follows XHTML specs. However, other popular markup types including KML, XAML,
RSS, MusicML, MathML are compliant XML schemas.
For the above reason, if your application builds XML prior to pandas operations,
use appropriate DOM libraries like etree and lxml to build the necessary
document rather than string concatenation or regex adjustments. Always remember
that XML is a special text file with markup rules.
With very large XML files (several hundred MBs to GBs), XPath and XSLT
can become memory-intensive operations. Be sure to have enough available
RAM for reading and writing to large XML files (roughly about 5 times the
size of text).
Because XSLT is a programming language, use it with caution since such scripts
can pose a security risk in your environment and can run large or infinite
recursive operations. Always test scripts on small fragments before full run.
The etree parser supports all functionality of both read_xml and
to_xml except for complex XPath and any XSLT. Though limited in features,
etree is still a reliable and capable parser and tree builder. Its
performance may trail lxml to a certain degree for larger files, but the
difference is relatively unnoticeable on small to medium size files.
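If lxml is not installed, or complex XPath/XSLT is not needed, the standard-library
parser can be requested explicitly. A small sketch reusing the books.xml file from
the earlier examples:
df = pd.read_xml("books.xml", parser="etree")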
Excel files#
The read_excel() method can read Excel 2007+ (.xlsx) files
using the openpyxl Python module. Excel 2003 (.xls) files
can be read using xlrd. Binary Excel (.xlsb)
files can be read using pyxlsb.
The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data.
See the cookbook for some advanced strategies.
Warning
The xlwt package for writing old-style .xls
excel files is no longer maintained.
The xlrd package is now only for reading
old-style .xls files.
Before pandas 1.3.0, the default argument engine=None to read_excel()
would result in using the xlrd engine in many cases, including new
Excel 2007+ (.xlsx) files. pandas will now default to using the
openpyxl engine.
It is strongly encouraged to install openpyxl to read Excel 2007+
(.xlsx) files.
Please do not report issues when using xlrd to read .xlsx files.
This is no longer supported; switch to using openpyxl instead.
Attempting to use the xlwt engine will raise a FutureWarning
unless the option io.excel.xls.writer is set to "xlwt".
While this option is now deprecated and will also raise a FutureWarning,
it can be globally set and the warning suppressed. Users are recommended to
write .xlsx files using the openpyxl engine instead.
Reading Excel files#
In the most basic use-case, read_excel takes a path to an Excel
file, and the sheet_name indicating which sheet to parse.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
ExcelFile class#
To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel
There will be a performance benefit for reading multiple sheets as the file is
read into memory only once.
xlsx = pd.ExcelFile("path_to_file.xls")
df = pd.read_excel(xlsx, "Sheet1")
The ExcelFile class can also be used as a context manager.
with pd.ExcelFile("path_to_file.xls") as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
The sheet_names property will generate
a list of the sheet names in the file.
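For example, a quick sketch that lists the available sheets before deciding which
ones to parse (the file name is the usual placeholder):
with pd.ExcelFile("path_to_file.xls") as xls:
    print(xls.sheet_names)  # e.g. ['Sheet1', 'Sheet2']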
The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)
Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.
# using the ExcelFile class
data = {}
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"])
# equivalent using the read_excel function
data = pd.read_excel(
"path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"]
)
ExcelFile can also be called with a xlrd.book.Book object
as a parameter. This allows the user to control how the excel file is read.
For example, sheets can be loaded on demand by calling xlrd.open_workbook()
with on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
Specifying sheets#
Note
The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.
Note
An ExcelFile’s attribute sheet_names provides access to a list of sheets.
The argument sheet_name allows specifying the sheet or sheets to read.
The default value for sheet_name is 0, indicating to read the first sheet.
Pass a string to refer to the name of a particular sheet in the workbook.
Pass an integer to refer to the index of a sheet. Indices follow Python
convention, beginning at 0.
Pass a list of either strings or integers, to return a dictionary of specified sheets.
Pass a None to return a dictionary of all available sheets.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])
Using the sheet index:
# Returns a DataFrame
pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])
Using all default values:
# Returns a DataFrame
pd.read_excel("path_to_file.xls")
Using None to get all sheets:
# Returns a dictionary of DataFrames
pd.read_excel("path_to_file.xls", sheet_name=None)
Using a list to get multiple sheets:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])
read_excel can read more than one sheet, by setting sheet_name to either
a list of sheet names, a list of sheet positions, or None to read all sheets.
Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
Reading a MultiIndex#
read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [406]: df = pd.DataFrame(
.....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
.....: )
.....:
In [407]: df.to_excel("path_to_file.xlsx")
In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [409]: df
Out[409]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same
parameters.
In [410]: df.index = df.index.set_names(["lvl1", "lvl2"])
In [411]: df.to_excel("path_to_file.xlsx")
In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [413]: df
Out[413]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each
should be passed to index_col and header:
In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
In [415]: df.to_excel("path_to_file.xlsx")
In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
In [417]: df
Out[417]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Missing values in columns specified in index_col will be forward filled to
allow roundtripping with to_excel for merged_cells=True. To avoid forward
filling the missing values use set_index after reading the data instead of
index_col.
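As a hedged sketch of that alternative, assuming the level values were written to
ordinary columns named "lvl1" and "lvl2" as in the example above:
# read without index_col so the index columns are not forward filled,
# then build the MultiIndex manually
df = pd.read_excel("path_to_file.xlsx")
df = df.set_index(["lvl1", "lvl2"])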
Parsing specific columns#
It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a usecols keyword to allow you to specify a subset of columns to parse.
Changed in version 1.0.0.
Passing in an integer for usecols will no longer work. Please pass in a list
of ints from 0 to usecols inclusive instead.
You can specify a comma-delimited set of Excel columns and ranges as a string:
pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")
If usecols is a list of integers, then it is assumed to be the file column
indices to be parsed.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
If usecols is a list of strings, it is assumed that each string corresponds
to a column name provided either by the user in names or inferred from the
document header row(s). Those strings define which columns will be parsed:
pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])
Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].
If usecols is callable, the callable function will be evaluated against
the column names, returning names where the callable function evaluates to True.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
Parsing dates#
Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
look like dates (but are not actually formatted as dates in excel), you can
use the parse_dates keyword to parse those strings to datetimes:
pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
Cell converters#
It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})
This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
def cfun(x):
return int(x) if x else -1
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
Dtype specifications#
As an alternative to converters, the type for an entire column can
be specified using the dtype keyword, which takes a dictionary
mapping column names to types. To interpret data with
no type inference, use the type str or object.
pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
Writing Excel files#
Writing Excel files to disk#
To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output.
The index_label will be placed in the second
row instead of the first. You can place it in the first row by setting the
merge_cells option in to_excel() to False:
df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)
In order to write separate DataFrames to separate sheets in a single Excel file,
one can pass an ExcelWriter.
with pd.ExcelWriter("path_to_file.xlsx") as writer:
df1.to_excel(writer, sheet_name="Sheet1")
df2.to_excel(writer, sheet_name="Sheet2")
Writing Excel files to memory#
pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.
from io import BytesIO
bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
# Save the workbook
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note
engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlrd' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.
Excel writer engines#
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed from a future version
of pandas. This is the only engine in pandas that supports writing to
.xls files.
pandas chooses an Excel writer via two methods:
the engine keyword argument
the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl
for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:
openpyxl: version 2.4 or higher is required
xlsxwriter
xlwt
# By setting the 'engine' in the DataFrame 'to_excel()' methods.
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter")
# Or via pandas configuration.
from pandas import options # noqa: E402
options.io.excel.xlsx.writer = "xlsxwriter"
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Style and formatting#
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method.
float_format : Format string for floating point numbers (default None).
freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
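For instance, a brief sketch combining both options (the file and sheet names are
the usual placeholders):
df.to_excel(
    "path_to_file.xlsx",
    sheet_name="Sheet1",
    float_format="%.2f",  # write floats with two decimal places
    freeze_panes=(1, 1),  # freeze the first row and first column
)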
Using the Xlsxwriter engine provides many options for controlling the
format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the
Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html
OpenDocument Spreadsheets#
New in version 0.25.
The read_excel() method can also read OpenDocument spreadsheets
using the odfpy module. The semantics and features for reading
OpenDocument spreadsheets match what can be done for Excel files using
engine='odf'.
# Returns a DataFrame
pd.read_excel("path_to_file.ods", engine="odf")
Note
Currently pandas only supports reading OpenDocument spreadsheets. Writing
is not implemented.
Binary Excel (.xlsb) files#
New in version 1.0.0.
The read_excel() method can also read binary Excel files
using the pyxlsb module. The semantics and features for reading
binary Excel files mostly match what can be done for Excel files using
engine='pyxlsb'. pyxlsb does not recognize datetime types
in files and will return floats instead.
# Returns a DataFrame
pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
Note
Currently pandas only supports reading binary Excel files. Writing
is not implemented.
Clipboard#
A handy way to grab data is to use the read_clipboard() method,
which takes the contents of the clipboard buffer and passes them to the
read_csv method. For instance, you can copy the following text to the
clipboard (CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
And then import the data directly to a DataFrame by calling:
>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard. Following which you can paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame into clipboard and reading it back.
>>> df = pd.DataFrame(
... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
... )
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got the same content back, which we had earlier written to the clipboard.
Note
You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
Pickling#
All pandas objects are equipped with to_pickle methods which use Python’s
cPickle module to save data structures to disk using the pickle format.
In [418]: df
Out[418]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [419]: df.to_pickle("foo.pkl")
The read_pickle function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:
In [420]: pd.read_pickle("foo.pkl")
Out[420]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning
read_pickle() is only guaranteed to be backwards compatible back to pandas version 0.20.3.
Compressed pickle files#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read
and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing.
The zip file format only supports reading and must contain only one data file
to be read.
The compression type can be an explicit parameter or be inferred from the file extension.
If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip',
'.xz', or '.zst', respectively.
The compression parameter can also be a dict in order to pass options to the
compression protocol. It must have a 'method' key set to the name
of the compression protocol, which must be one of
{'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to
the underlying compression library.
In [421]: df = pd.DataFrame(
.....: {
.....: "A": np.random.randn(1000),
.....: "B": "foo",
.....: "C": pd.date_range("20130101", periods=1000, freq="s"),
.....: }
.....: )
.....:
In [422]: df
Out[422]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Using an explicit compression type:
In [423]: df.to_pickle("data.pkl.compress", compression="gzip")
In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [425]: rt
Out[425]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Inferring compression type from the extension:
In [426]: df.to_pickle("data.pkl.xz", compression="infer")
In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer")
In [428]: rt
Out[428]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
The default is to ‘infer’:
In [429]: df.to_pickle("data.pkl.gz")
In [430]: rt = pd.read_pickle("data.pkl.gz")
In [431]: rt
Out[431]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
In [432]: df["A"].to_pickle("s1.pkl.bz2")
In [433]: rt = pd.read_pickle("s1.pkl.bz2")
In [434]: rt
Out[434]:
0 -0.828876
1 -0.110383
2 2.357598
3 -1.620073
4 0.440903
...
995 -1.177365
996 1.236988
997 0.743946
998 -0.533097
999 -0.140850
Name: A, Length: 1000, dtype: float64
Passing options to the compression protocol in order to speed up compression:
In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
msgpack#
pandas support for msgpack has been removed in version 1.0.0. It is
recommended to use pickle instead.
Alternatively, you can also use the Arrow IPC serialization format for on-the-wire
transmission of pandas objects. For documentation on pyarrow, see
here.
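As a rough sketch of the Arrow IPC route, assuming pyarrow is installed (the file
name is a placeholder):
import pyarrow as pa
import pyarrow.ipc

# convert the DataFrame to an Arrow table and write it in the IPC file format
table = pa.Table.from_pandas(df)
with pa.OSFile("data.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# read it back into pandas
with pa.memory_map("data.arrow", "r") as source:
    df_roundtrip = pa.ipc.open_file(source).read_all().to_pandas()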
HDF5 (PyTables)#
HDFStore is a dict-like object which reads and writes pandas using
the high performance HDF5 format using the excellent PyTables library. See the cookbook
for some advanced strategies.
Warning
pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle. Loading pickled data received from
untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
In [436]: store = pd.HDFStore("store.h5")
In [437]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a
dict:
In [438]: index = pd.date_range("1/1/2000", periods=8)
In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
# store.put('s', s) is an equivalent method
In [441]: store["s"] = s
In [442]: store["df"] = df
In [443]: store
Out[443]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In a current or later Python session, you can retrieve stored objects:
# store.get('df') is an equivalent method
In [444]: store["df"]
Out[444]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# dotted (attribute) access provides get as well
In [445]: store.df
Out[445]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Deletion of the object specified by the key:
# store.remove('df') is an equivalent method
In [446]: del store["df"]
In [447]: store
Out[447]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Closing a Store and using a context manager:
In [448]: store.close()
In [449]: store
Out[449]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [450]: store.is_open
Out[450]: False
# Working with, and automatically closing the store using a context manager
In [451]: with pd.HDFStore("store.h5") as store:
.....: store.keys()
.....:
Read/write API#
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing,
similar to how read_csv and to_csv work.
In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True)
In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"])
Out[454]:
A B
3 3 3
4 4 4
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [455]: df_with_missing = pd.DataFrame(
.....: {
.....: "col1": [0, np.nan, 2],
.....: "col2": [1, np.nan, np.nan],
.....: }
.....: )
.....:
In [456]: df_with_missing
Out[456]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
In [458]: pd.read_hdf("file.h5", "df_with_missing")
Out[458]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [459]: df_with_missing.to_hdf(
.....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True
.....: )
.....:
In [460]: pd.read_hdf("file.h5", "df_with_missing")
Out[460]:
col1 col2
0 0.0 1.0
2 2.0 NaN
Fixed format#
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
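For example, a short sketch that requests the fixed format explicitly (the file
name is hypothetical):
df.to_hdf("store_fixed.h5", "df", format="fixed")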
Warning
A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
>>> pd.read_hdf("test_fixed.h5", "df", where="index>5")
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format#
HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete and query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
enable put/append/to_hdf to store in the table format by default.
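A brief sketch of that option in use (the file name is hypothetical):
pd.set_option("io.hdf.default_format", "table")
df.to_hdf("store_default.h5", "df")  # stored as a table without passing format="table"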
In [461]: store = pd.HDFStore("store.h5")
In [462]: df1 = df[0:4]
In [463]: df2 = df[4:]
# append data (creates a table automatically)
In [464]: store.append("df", df1)
In [465]: store.append("df", df2)
In [466]: store
Out[466]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# select the entire object
In [467]: store.select("df")
Out[467]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# the type of stored data
In [468]: store.root.df._v_attrs.pandas_type
Out[468]: 'frame_table'
Note
You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys#
Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are always
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and below, so be careful.
In [469]: store.put("foo/bar/bah", df)
In [470]: store.append("food/orange", df)
In [471]: store.append("food/apple", df)
In [472]: store
Out[472]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# a list of keys are returned
In [473]: store.keys()
Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']
# remove all nodes under this level
In [474]: store.remove("food")
In [475]: store
Out[475]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which
will yield a tuple for each group key along with the relative keys of its contents.
In [476]: for (path, subgroups, subkeys) in store.walk():
.....: for subgroup in subgroups:
.....: print("GROUP: {}/{}".format(path, subgroup))
.....: for subkey in subkeys:
.....: key = "/".join([path, subkey])
.....: print("KEY: {}".format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Warning
Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]
Instead, use explicit string based keys:
In [477]: store["foo/bar/bah"]
Out[477]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Storing types#
Storing mixed types in a table#
Storing mixed-dtype data is supported. Strings are stored as a
fixed-width format using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append
will set a larger minimum for the string columns. Storing floats,
strings, ints, bools, and datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default
nan representation on disk (which converts to/from np.nan); this
defaults to nan.
In [478]: df_mixed = pd.DataFrame(
.....: {
.....: "A": np.random.randn(8),
.....: "B": np.random.randn(8),
.....: "C": np.array(np.random.randn(8), dtype="float32"),
.....: "string": "string",
.....: "int": 1,
.....: "bool": True,
.....: "datetime64": pd.Timestamp("20010102"),
.....: },
.....: index=list(range(8)),
.....: )
.....:
In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan
In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
In [481]: df_mixed1 = store.select("df_mixed")
In [482]: df_mixed1
Out[482]:
A B C string int bool datetime64
0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02
1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02
2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02
3 NaN NaN -0.225250 NaN 1 True NaT
4 NaN NaN -0.890978 NaN 1 True NaT
5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02
6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02
7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02
In [483]: df_mixed1.dtypes.value_counts()
Out[483]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
# we have provided a minimum string column size
In [484]: store.root.df_mixed.table
Out[484]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
Storing MultiIndex DataFrames#
Storing MultiIndex DataFrames as tables is very similar to
storing/selecting from homogeneous index DataFrames.
In [485]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=["foo", "bar"],
.....: )
.....:
In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [487]: df_mi
Out[487]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
In [488]: store.append("df_mi", df_mi)
In [489]: store.select("df_mi")
Out[489]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
# the levels are automatically included as data columns
In [490]: store.select("df_mi", "foo=bar")
Out[490]:
A B C
foo bar
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
Note
The index keyword is reserved and cannot be used as a level name.
Querying#
Querying a table#
select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.
A query is specified using the Term class under the hood, as a boolean expression.
index and columns are supported indexers of DataFrames.
if data_columns are specified, these can be used as additional indexers.
level name in a MultiIndex, with default name level_0, level_1, … if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
| : or
& : and
( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note
= will be automatically expanded to the comparison operator ==
~ is the not operator, but can only be used in very limited
circumstances
If a list/tuple of expressions is passed they will be combined via &
The following are valid expressions:
'index >= date'
"columns = ['A', 'D']"
"columns in ['A', 'D']"
'columns = A'
'columns == A'
"~(columns = ['A', 'B'])"
'index > df.index[3] & string = "bar"'
'(index > df.index[3] & index <= df.index[6]) | string = "bar"'
"ts >= Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
functions that will be evaluated, e.g. Timestamp('2012-02-01')
strings, e.g. "bar"
date-like, e.g. 20130101, or "20130101"
lists, e.g. "['A', 'B']"
variables that are defined in the local names space, e.g. date
Note
Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select("df", "index == string")
instead of this
string = "HolyMoly'"
store.select('df', f'index == {string}')
The latter will not work and will raise a SyntaxError. Note that
there’s a single quote followed by a double quote in the string
variable.
If you must interpolate, use the '%r' format specifier
store.select("df", "index == %r" % string)
which will quote string.
Here are some examples:
In [491]: dfq = pd.DataFrame(
.....: np.random.randn(10, 4),
.....: columns=list("ABCD"),
.....: index=pd.date_range("20130101", periods=10),
.....: )
.....:
In [492]: store.append("dfq", dfq, format="table", data_columns=True)
Use boolean expressions, with in-line function evaluation.
In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")
Out[493]:
A B
2013-01-05 1.366810 1.073372
2013-01-06 2.119746 -2.628174
2013-01-07 0.337920 -0.634027
2013-01-08 1.053434 1.109090
2013-01-09 -0.772942 -0.269415
2013-01-10 0.048562 -0.285920
Use inline column reference.
In [494]: store.select("dfq", where="A>0 or C>0")
Out[494]:
A B C D
2013-01-01 0.856838 1.491776 0.001283 0.701816
2013-01-02 -1.097917 0.102588 0.661740 0.443531
2013-01-03 0.559313 -0.459055 -1.222598 -0.455304
2013-01-05 1.366810 1.073372 -0.994957 0.755314
2013-01-06 2.119746 -2.628174 -0.089460 -0.133636
2013-01-07 0.337920 -0.634027 0.421107 0.604303
2013-01-08 1.053434 1.109090 -0.367891 -0.846206
2013-01-10 0.048562 -0.285920 1.334100 0.194462
The columns keyword can be supplied to select a list of columns to be
returned, this is equivalent to passing a
'columns=list_of_columns_to_filter':
In [495]: store.select("df", "columns=['A', 'B']")
Out[495]:
A B
2000-01-01 -0.398501 -0.677311
2000-01-02 -1.167564 -0.593353
2000-01-03 -0.131959 0.089012
2000-01-04 0.169405 -1.358046
2000-01-05 0.492195 0.076693
2000-01-06 -0.285283 -1.210529
2000-01-07 0.941577 -0.342447
2000-01-08 0.052607 2.093214
start and stop parameters can be specified to limit the total search
space. These are in terms of the total number of rows in a table.
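For example, a minimal sketch that restricts the query to the first five rows of the on-disk table:
# only rows 0-4 of the stored table are considered by the query
store.select("dfq", "columns=['A', 'B']", start=0, stop=5)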
Note
select will raise a ValueError if the query expression has an unknown
variable reference. Usually this means that you are trying to select on a column
that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]#
You can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D,s,ms,us,ns for the timedelta. Here’s an example:
In [496]: from datetime import timedelta
In [497]: dftd = pd.DataFrame(
.....: {
.....: "A": pd.Timestamp("20130101"),
.....: "B": [
.....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
.....: for i in range(10)
.....: ],
.....: }
.....: )
.....:
In [498]: dftd["C"] = dftd["A"] - dftd["B"]
In [499]: dftd
Out[499]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [500]: store.append("dftd", dftd, data_columns=True)
In [501]: store.select("dftd", "C<'-3.5D'")
Out[501]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
Query MultiIndex#
Selecting from a MultiIndex can be achieved by using the name of the level.
In [502]: df_mi.index.names
Out[502]: FrozenList(['foo', 'bar'])
In [503]: store.select("df_mi", "foo=baz and bar=two")
Out[503]:
A B C
foo bar
baz two 0.183573 0.145277 0.308146
If the MultiIndex levels names are None, the levels are automatically made available via
the level_n keyword with n the level of the MultiIndex you want to select from.
In [504]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:
In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [506]: df_mi_2
Out[506]:
A B C
foo one -0.646538 1.210676 -0.315409
two 1.528366 0.376542 0.174490
three 1.247943 -0.742283 0.710400
bar one 0.434128 -1.246384 1.139595
two 1.388668 -0.413554 -0.666287
baz two 0.010150 -0.163820 -0.115305
three 0.216467 0.633720 0.473945
qux one -0.155446 1.287082 0.320201
two -1.256989 0.874920 0.765944
three 0.025557 -0.729782 -0.127439
In [507]: store.append("df_mi_2", df_mi_2)
# the levels are automatically included as data columns with keyword level_n
In [508]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[508]:
A B C
foo two 1.528366 0.376542 0.17449
Indexing#
You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed your queries a great deal when you use a select with the
indexed dimension as the where.
Note
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.
# we have automagically already created an index (in the first section)
In [509]: i = store.root.df.table.cols.index.index
In [510]: i.optlevel, i.kind
Out[510]: (6, 'medium')
# change an index by passing new parameters
In [511]: store.create_table_index("df", optlevel=9, kind="full")
In [512]: i = store.root.df.table.cols.index.index
In [513]: i.optlevel, i.kind
Out[513]: (9, 'full')
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.
In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [516]: st = pd.HDFStore("appends.h5", mode="w")
In [517]: st.append("df", df_1, data_columns=["B"], index=False)
In [518]: st.append("df", df_2, data_columns=["B"], index=False)
In [519]: st.get_storer("df").table
Out[519]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
Then create the index when finished appending.
In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
In [521]: st.get_storer("df").table
Out[521]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, fullshuffle, zlib(1)).is_csi=True}
In [522]: st.close()
See here for how to create a completely-sorted-index (CSI) on an existing store.
Query via data columns#
You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance say you want to perform this common
operation, on-disk, and return just the frame that matches this
query. You can specify data_columns = True to force all columns to
be data_columns.
In [523]: df_dc = df.copy()
In [524]: df_dc["string"] = "foo"
In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan
In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar"
In [527]: df_dc["string2"] = "cool"
In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0
In [529]: df_dc
Out[529]:
A B C string string2
2000-01-01 -0.398501 -0.677311 -0.874991 foo cool
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-04 0.169405 -1.358046 -0.105563 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool
2000-01-07 0.941577 -0.342447 0.222031 foo cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# on-disk operations
In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
In [531]: store.select("df_dc", where="B > 0")
Out[531]:
A B C string string2
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# getting creative
In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo")
Out[532]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# this is in-memory version of this type of selection
In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
Out[533]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# we have automagically created this index and the B/C/string/string2
# columns are stored separately as ``PyTables`` columns
In [534]: store.root.df_dc.table
Out[534]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"B": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"C": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}
There is some performance degradation by making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
create a new table!).
Iterator#
You can pass iterator=True or chunksize=number_in_a_chunk
to select and select_as_multiple to return an iterator on the results.
The default is 50,000 rows returned in a chunk.
In [535]: for df in store.select("df", chunksize=3):
.....: print(df)
.....:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
A B C
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
A B C
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Note
You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.
for df in pd.read_hdf("store.h5", "df", chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you
are doing a query, then the chunksize will subdivide the total rows in the table
and the query applied, returning an iterator on potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return
chunks.
In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)})
In [537]: dfeq
Out[537]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
In [538]: store.append("dfeq", dfeq, data_columns=["number"])
In [539]: def chunks(l, n):
.....: return [l[i: i + n] for i in range(0, len(l), n)]
.....:
In [540]: evens = [2, 4, 6, 8, 10]
In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens")
In [542]: for c in chunks(coordinates, 2):
.....: print(store.select("dfeq", where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10
Advanced queries#
Select a single column#
To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
In [543]: store.select_column("df_dc", "index")
Out[543]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
In [544]: store.select_column("df_dc", "string")
Out[544]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates#
Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.
In [545]: df_coord = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [546]: store.append("df_coord", df_coord)
In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101")
In [548]: c
Out[548]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
In [549]: store.select("df_coord", where=c)
Out[549]:
0 1
2002-01-02 0.009035 0.921784
2002-01-03 -1.476563 -1.376375
2002-01-04 1.266731 2.173681
2002-01-05 0.147621 0.616468
2002-01-06 0.008611 2.136001
... ... ...
2002-09-22 0.781169 -0.791687
2002-09-23 -0.764810 -2.000933
2002-09-24 -0.345662 0.393915
2002-09-25 -0.116661 0.834638
2002-09-26 -1.341780 0.686366
[268 rows x 2 columns]
Selecting using a where mask#
Sometimes your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the rows of
a DatetimeIndex whose month is 5.
In [550]: df_mask = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [551]: store.append("df_mask", df_mask)
In [552]: c = store.select_column("df_mask", "index")
In [553]: where = c[pd.DatetimeIndex(c).month == 5].index
In [554]: store.select("df_mask", where=where)
Out[554]:
0 1
2000-05-01 -0.386742 -0.977433
2000-05-02 -0.228819 0.471671
2000-05-03 0.337307 1.840494
2000-05-04 0.050249 0.307149
2000-05-05 -0.802947 -0.946730
... ... ...
2002-05-27 1.605281 1.741415
2002-05-28 -0.804450 -0.715040
2002-05-29 -0.874851 0.037178
2002-05-30 -0.161167 -1.294944
2002-05-31 -0.258463 -0.731969
[93 rows x 2 columns]
Storer object#
If you want to inspect the stored object, retrieve via
get_storer. You could use this programmatically to say get the number
of rows in an object.
In [555]: store.get_storer("df_dc").nrows
Out[555]: 8
Multiple table queries#
The methods append_to_multiple and
select_as_multiple can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) that you index most/all of the columns, and perform your
queries. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.nan rows are not written to the HDFStore, so if
you choose to call dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
In [556]: df_mt = pd.DataFrame(
.....: np.random.randn(8, 6),
.....: index=pd.date_range("1/1/2000", periods=8),
.....: columns=["A", "B", "C", "D", "E", "F"],
.....: )
.....:
In [557]: df_mt["foo"] = "bar"
In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan
# you can also create the tables individually
In [559]: store.append_to_multiple(
.....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt"
.....: )
.....:
In [560]: store
Out[560]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# individual tables were created
In [561]: store.select("df1_mt")
Out[561]:
A B
2000-01-01 0.079529 -1.459471
2000-01-02 NaN NaN
2000-01-03 -0.423113 2.314361
2000-01-04 0.756744 -0.792372
2000-01-05 -0.184971 0.170852
2000-01-06 0.678830 0.633974
2000-01-07 0.034973 0.974369
2000-01-08 -2.110103 0.243062
In [562]: store.select("df2_mt")
Out[562]:
C D E F foo
2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar
2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar
2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar
2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar
2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar
2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar
2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar
# as a multiple
In [563]: store.select_as_multiple(
.....: ["df1_mt", "df2_mt"],
.....: where=["A>0", "B>0"],
.....: selector="df1_mt",
.....: )
.....:
Out[563]:
A B C D E F foo
2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar
Delete from a table#
You can delete from a table selectively by specifying a where. In
deleting rows, it is important to understand that PyTables deletes
rows by erasing them, then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.
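A minimal sketch of such a selective delete (not executed here), using the df_coord table appended later in this section:
# removes only the rows matching the criterion; the remaining rows stay in the table
store.remove("df_coord", "index > 20020101")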
Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:
date_1
id_1
id_2
.
id_n
date_2
id_1
.
id_n
It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.
Warning
Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files
automatically. Thus, repeatedly deleting (or removing nodes) and adding
again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Notes & caveats#
Compression#
PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables. Two parameters are used to
control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed.
complevel=0 and complevel=None disables compression and
0<complevel<10 enables compression.
complib specifies which compression library to use.
If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates
or speed and the results will depend on the type of data. Which type of
compression to choose depends on your specific needs and data. The list
of supported compression libraries:
zlib: The default compression library.
A classic in terms of compression, achieves good compression
rates but is somewhat slow.
lzo: Fast
compression and decompression.
bzip2: Good compression rates.
blosc: Fast compression and
decompression.
Support for alternative blosc compressors:
blosc:blosclz This is the
default compressor for blosc
blosc:lz4:
A compact, very popular and fast compressor.
blosc:lz4hc:
A tweaked version of LZ4, produces better
compression ratios at the expense of speed.
blosc:snappy:
A popular compressor used in many places.
blosc:zlib: A classic;
somewhat slower than the previous ones, but
achieving better compression ratios.
blosc:zstd: An
extremely well balanced codec; it provides the best
compression ratios among the others above, and at
reasonably fast speed.
If complib is defined as something other than the listed libraries a
ValueError exception is issued.
Note
If the library specified with the complib option is missing on your platform,
compression defaults to zlib without further ado.
Enable compression for all objects within the file:
store_compressed = pd.HDFStore(
"store_compressed.h5", complevel=9, complib="blosc:blosclz"
)
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append("df", df, complib="zlib", complevel=5)
ptrepack#
PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5
Furthermore ptrepack in.h5 out.h5 will repack the file to allow
you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the copy method.
Caveats#
Warning
HDFStore is not-threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See the (GH2397) for more information.
If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created, columns (DataFrame)
are fixed; only exactly the same columns can be appended.
Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
Warning
PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.
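For instance, a column name containing a space is not a natural identifier; a hedged sketch of the kind of frame that can trigger the warning when stored as a data column (the key df_nn is hypothetical):
# "my column" is not a valid Python identifier, so PyTables may warn
df_nn = pd.DataFrame({"my column": [1, 2, 3]})
store.append("df_nn", df_nn, data_columns=True)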
DataTypes#
HDFStore will map an object dtype to the PyTables underlying
dtype. This means the following types are known to work:
Type                                                  Represents missing values
floating : float64, float32, float16                  np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                        NaT
timedelta64[ns]                                       NaT
categorical : see the section below
object : strings                                      np.nan
unicode columns are not supported, and WILL FAIL.
Categorical data#
You can write data that contains category dtypes to a HDFStore.
Queries work the same as if it was an object array. However, the category dtyped data is
stored in a more efficient manner.
In [564]: dfcat = pd.DataFrame(
.....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)}
.....: )
.....:
In [565]: dfcat
Out[565]:
A B
0 a -1.608059
1 a 0.851060
2 b -0.736931
3 b 0.003538
4 c -1.422611
5 d 2.060901
6 b 0.993899
7 a -1.371768
In [566]: dfcat.dtypes
Out[566]:
A category
B float64
dtype: object
In [567]: cstore = pd.HDFStore("cats.h5", mode="w")
In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"])
In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']")
In [570]: result
Out[570]:
A B
2 b -0.736931
3 b 0.003538
4 c -1.422611
6 b 0.993899
In [571]: result.dtypes
Out[571]:
A category
B float64
dtype: object
String columns#
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column itemsize is calculated as the maximum of the
length of data (for that column) that is passed to the HDFStore, in the first append. Subsequent appends
may introduce a string for a column larger than the column can hold; in that case an Exception will be raised (otherwise you
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note
If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed
In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
In [573]: dfs
Out[573]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
# A and B have a size of 30
In [574]: store.append("dfs", dfs, min_itemsize=30)
In [575]: store.get_storer("dfs").table
Out[575]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
# A is created as a data_column with a size of 30
# B's size is calculated
In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30})
In [577]: store.get_storer("dfs2").table
Out[577]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"A": Index(6, mediumshuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
You could inadvertently turn an actual nan value into a missing value.
In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
In [579]: dfss
Out[579]:
A
0 foo
1 bar
2 nan
In [580]: store.append("dfss", dfss)
In [581]: store.select("dfss")
Out[581]:
A
0 foo
1 bar
2 NaN
# here you need to specify a different nan rep
In [582]: store.append("dfss2", dfss, nan_rep="_nan_")
In [583]: store.select("dfss2")
Out[583]:
A
0 foo
1 bar
2 nan
External compatibility#
HDFStore writes table format objects in specific formats suitable for
producing loss-less round trips to pandas objects. For external
compatibility, HDFStore can read native PyTables format
tables.
It is possible to write an HDFStore object that can easily be imported into R using the
rhdf5 library (Package website). Create a table format store like this:
In [584]: df_for_r = pd.DataFrame(
.....: {
.....: "first": np.random.rand(100),
.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100,)),
.....: },
.....: index=range(100),
.....: )
.....:
In [585]: df_for_r.head()
Out[585]:
first second class
0 0.013480 0.504941 0
1 0.690984 0.898188 1
2 0.510113 0.618748 1
3 0.357698 0.004972 0
4 0.451658 0.012065 1
In [586]: store_export = pd.HDFStore("export.h5")
In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns)
In [588]: store_export
Out[588]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
Now you can import the DataFrame into R:
> data = loadhdf5data("transfer.hdf5")
> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1
Note
The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.
Performance#
The tables format comes with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
You can pass expectedrows=<int> to the first append,
to set the TOTAL number of rows that PyTables will expect.
This will optimize read/write performance. (Both options are shown in the sketch after this list.)
Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.
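Tying the chunksize and expectedrows options above together, a minimal sketch (df_big and the row counts are illustrative only):
# write in chunks of 100,000 rows and tell PyTables to expect ~1,000,000 rows in total
store.append("df_big", df_big, chunksize=100000, expectedrows=1000000)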
Feather#
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as categorical and datetime with tz.
Several caveats:
The format will NOT write an Index, or MultiIndex for the
DataFrame and will raise an error if a non-default one is provided. You
can .reset_index() to store the index or .reset_index(drop=True) to
ignore it.
Duplicate column names and non-string columns names are not supported
Actual Python objects in object dtype columns are not supported. These will
raise a helpful error message on an attempt at serialization.
See the Full Documentation.
In [589]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.Categorical(list("abc")),
.....: "g": pd.date_range("20130101", periods=3),
.....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "i": pd.date_range("20130101", periods=3, freq="ns"),
.....: }
.....: )
.....:
In [590]: df
Out[590]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
In [591]: df.dtypes
Out[591]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Write to a feather file.
In [592]: df.to_feather("example.feather")
Read from a feather file.
In [593]: result = pd.read_feather("example.feather")
In [594]: result
Out[594]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
# we preserve dtypes
In [595]: result.dtypes
Out[595]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Parquet#
Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to
make reading and writing data frames efficient, and to make sharing data across data analysis
languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible
while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrame s, supporting all of the pandas
dtypes, including extension dtypes such as datetime with tz.
Several caveats.
Duplicate column names and non-string columns names are not supported.
The pyarrow engine always writes the index to the output, but fastparquet only writes non-default
indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can
force including or omitting indexes with the index argument, regardless of the underlying engine.
Index level names, if specified, must be strings.
In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.
Non supported types include Interval and actual Python object types. These will raise a helpful error message
on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
The pyarrow engine preserves extension data types such as the nullable integer and string data
type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols,
see the extension types documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto.
If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto,
then pyarrow is tried, falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
Note
These engines are very similar and should read/write nearly identical parquet format files.
pyarrow>=8.0.0 supports timedelta data, fastparquet>=0.1.4 supports timezone aware datetimes.
These libraries differ by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library).
In [596]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.date_range("20130101", periods=3),
.....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "h": pd.Categorical(list("abc")),
.....: "i": pd.Categorical(list("abc"), ordered=True),
.....: }
.....: )
.....:
In [597]: df
Out[597]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c
In [598]: df.dtypes
Out[598]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Write to a parquet file.
In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow")
In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")
Read from a parquet file.
In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
In [603]: result.dtypes
Out[603]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Read only certain columns of a parquet file.
In [604]: result = pd.read_parquet(
.....: "example_fp.parquet",
.....: engine="fastparquet",
.....: columns=["a", "b"],
.....: )
.....:
In [605]: result = pd.read_parquet(
.....: "example_pa.parquet",
.....: engine="pyarrow",
.....: columns=["a", "b"],
.....: )
.....:
In [606]: result.dtypes
Out[606]:
a object
b int64
dtype: object
Handling indexes#
Serializing a DataFrame to parquet may include the implicit index as one or
more columns in the output file. Thus, this code:
In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [608]: df.to_parquet("test.parquet", engine="pyarrow")
creates a parquet file with three columns if you use pyarrow for serialization:
a, b, and __index_level_0__. If you’re using fastparquet, the
index may or may not
be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject
the file, because that column doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to
to_parquet():
In [609]: df.to_parquet("test.parquet", index=False)
This creates a parquet file with just the two expected columns, a and b.
If your DataFrame has a custom index, you won’t get it back when you load
this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the
underlying engine’s default behavior.
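For example, a sketch that forces the index to be written with the fastparquet engine:
df.to_parquet("test.parquet", engine="fastparquet", index=True)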
Partitioning Parquet files#
Parquet supports partitioning of data based on the values of one or more columns.
In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
The path specifies the parent directory to which data will be saved.
The partition_cols are the column names by which the dataset will be partitioned.
Columns are partitioned in the order they are given. The partition splits are
determined by the unique values in the partition columns.
The above example creates a partitioned dataset that may look like:
test
├── a=0
│ ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│ └── ...
└── a=1
├── e6ab24a4f45147b49b54a662f0c412a3.parquet
└── ...
ORC#
New in version 1.0.0.
Similar to the parquet format, the ORC Format is a binary columnar serialization
for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the
ORC format, read_orc() and to_orc(). This requires the pyarrow library.
Warning
It is highly recommended to install pyarrow using conda due to some issues caused by pyarrow.
to_orc() requires pyarrow>=7.0.0.
read_orc() and to_orc() are not supported on Windows yet, you can find valid environments on install optional dependencies.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
In [612]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(4.0, 7.0, dtype="float64"),
.....: "d": [True, False, True],
.....: "e": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [613]: df
Out[613]:
a b c d e
0 a 1 4.0 True 2013-01-01
1 b 2 5.0 False 2013-01-02
2 c 3 6.0 True 2013-01-03
In [614]: df.dtypes
Out[614]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Write to an orc file.
In [615]: df.to_orc("example_pa.orc", engine="pyarrow")
Read from an orc file.
In [616]: result = pd.read_orc("example_pa.orc")
In [617]: result.dtypes
Out[617]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Read only certain columns of an orc file.
In [618]: result = pd.read_orc(
.....: "example_pa.orc",
.....: columns=["a", "b"],
.....: )
.....:
In [619]: result.dtypes
Out[619]:
a object
b int64
dtype: object
SQL queries#
The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respects the Python
DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Note
The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to the specific function depending on
the provided input (database table name or SQL query).
Table names do not need to be quoted if they have special characters.
In the following example, we use the SQLite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
In [620]: from sqlalchemy import create_engine
# Create your engine.
In [621]: engine = create_engine("sqlite:///:memory:")
If you want to manage your own connections you can pass one of those instead. The example below opens a
connection to the database using a Python context manager that automatically closes the connection after
the block has completed.
See the SQLAlchemy docs
for an explanation of how the database connection is handled.
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table("data", conn)
Warning
When you open a connection to a database you are also responsible for closing it.
Side effects of leaving a connection open may include locking the database or
other breaking behaviour.
Writing DataFrames#
Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().
id   Date         Col_1   Col_2   Col_3
26   2012-10-18   X        25.7   True
42   2012-10-19   Y       -12.4   False
63   2012-10-20   Z        5.73   True
In [622]: import datetime
In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"]
In [624]: d = [
.....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True),
.....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False),
.....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True),
.....: ]
.....:
In [625]: data = pd.DataFrame(d, columns=c)
In [626]: data
Out[626]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True
In [627]: data.to_sql("data", engine)
Out[627]: 3
With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
In [628]: data.to_sql("data_chunked", engine, chunksize=1000)
Out[628]: 3
SQL data types#
to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
In [629]: from sqlalchemy.types import String
In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String})
Out[630]: 3
Note
Due to the limited support for timedelta’s in the different database
flavors, columns with type timedelta64 will be written as integer
values as nanoseconds to the database and a warning will be raised.
Note
Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.
Datetime data types#
Using SQLAlchemy, to_sql() is capable of writing
datetime data that is timezone naive or timezone aware. However, the resulting
data stored in the database ultimately depends on the supported data type
for datetime data of the database system being used.
The following table lists supported data types for datetime data for some
common databases. Other database dialects may have different data types for
datetime data.
Database     SQL Datetime Types                      Timezone Support
SQLite       TEXT                                    No
MySQL        TIMESTAMP or DATETIME                   No
PostgreSQL   TIMESTAMP or TIMESTAMP WITH TIME ZONE   Yes
When writing timezone aware data to databases that do not support timezones,
the data will be written as timezone naive timestamps that are in local time
with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is
timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas
will convert the data to UTC.
Insertion method#
The parameter method controls the SQL insertion clause used.
Possible values are:
None: Uses standard SQL INSERT clause (one per row).
'multi': Pass multiple values in a single INSERT clause.
It uses a special SQL syntax not supported by all backends.
This usually provides better performance for analytic databases
like Presto and Redshift, but has worse performance for
traditional SQL backend if the table contains many columns.
For more information check the SQLAlchemy documentation.
callable with signature (pd_table, conn, keys, data_iter):
This can be used to implement a more performant insertion method based on
specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
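The callable can then be passed via the method argument (a sketch, assuming a PostgreSQL engine; the table name data_copy is hypothetical):
data.to_sql("data_copy", engine, method=psql_insert_copy)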
Reading tables#
read_sql_table() will read a database table given the
table name and optionally a subset of columns to read.
Note
In order to use read_sql_table(), you must have the
SQLAlchemy optional dependency installed.
In [631]: pd.read_sql_table("data", engine)
Out[631]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
Note
Note that pandas infers column dtypes from query outputs, and not by looking
up data types in the physical database schema. For example, assume userid
is an integer column in a table. Then, intuitively, select userid ... will
return integer-valued series, while select cast(userid as text) ... will
return object-valued (str) series. Accordingly, if the query output is empty,
then all resulting columns will be returned as object-valued (since they are
most general). If you foresee that your query will sometimes generate an empty
result, you may want to explicitly typecast afterwards to ensure dtype
integrity.
You can also specify the name of the column as the DataFrame index,
and specify a subset of columns to be read.
In [632]: pd.read_sql_table("data", engine, index_col="id")
Out[632]:
index Date Col_1 Col_2 Col_3
id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True
In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"])
Out[633]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
And you can explicitly force columns to be parsed as dates:
In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"])
Out[634]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
If needed you can explicitly specify a format string, or a dict of arguments
to pass to pandas.to_datetime():
pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
pd.read_sql_table(
"data",
engine,
parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)
You can check if a table exists using has_table()
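For example, a minimal sketch:
from pandas.io import sql
sql.has_table("data", engine)  # True if the "data" table exists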
Schema support#
Reading from and writing to different schemas is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schemas). For example:
df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")
Querying#
You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
In [635]: pd.read_sql_query("SELECT * FROM data", engine)
Out[635]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1
Of course, you can specify a more “complex” query.
In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[636]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument.
Specifying this will return an iterator through chunks of the query result:
In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
In [638]: df.to_sql("data_chunks", engine, index=False)
Out[638]: 20
In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.070470 0.901320 0.937577
1 0.295770 1.420548 -0.005283
2 -1.518598 -0.730065 0.226497
3 -2.061465 0.632115 0.853619
4 2.719155 0.139018 0.214557
a b c
0 -1.538924 -0.366973 -0.748801
1 -0.478137 -1.559153 -3.097759
2 -2.320335 -0.221090 0.119763
3 0.608228 1.064810 -0.780506
4 -2.736887 0.143539 1.170191
a b c
0 -1.573076 0.075792 -1.722223
1 -0.774650 0.803627 0.221665
2 0.584637 0.147264 1.057825
3 -0.284136 0.912395 1.552808
4 0.189376 -0.109830 0.539341
a b c
0 0.592591 -0.155407 -1.356475
1 0.833837 1.524249 1.606722
2 -0.029487 -0.051359 1.700152
3 0.921484 -0.926347 0.979818
4 0.182380 -0.186376 0.049820
You can also run a plain query without creating a DataFrame with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
from pandas.io import sql
sql.execute("SELECT * FROM table_name", engine)
sql.execute(
"INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)
Engine connection examples#
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:[email protected]:5432/mydatabase")
engine = create_engine("mysql+mysqldb://scott:[email protected]/foo")
engine = create_engine("oracle://scott:[email protected]:1521/sidname")
engine = create_engine("mssql+pyodbc://mydsn")
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")
# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")
For more information see the examples the SQLAlchemy documentation
Advanced SQLAlchemy queries#
You can use SQLAlchemy constructs to describe your query.
Use sqlalchemy.text() to specify query parameters in a backend-neutral way
In [640]: import sqlalchemy as sa
In [641]: pd.read_sql(
.....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"}
.....: )
.....:
Out[641]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions
In [642]: metadata = sa.MetaData()
In [643]: data_table = sa.Table(
.....: "data",
.....: metadata,
.....: sa.Column("index", sa.Integer),
.....: sa.Column("Date", sa.DateTime),
.....: sa.Column("Col_1", sa.String),
.....: sa.Column("Col_2", sa.Float),
.....: sa.Column("Col_3", sa.Boolean),
.....: )
.....:
In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine)
Out[644]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()
In [645]: import datetime as dt
In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date"))
In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)})
Out[647]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True
Sqlite fallback#
The use of sqlite is supported without using SQLAlchemy.
This mode requires a Python database adapter which respects the Python
DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(":memory:")
And then issue the following queries:
data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)
Google BigQuery#
Warning
Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package pandas-gbq. You can pip install pandas-gbq to get it.
The pandas-gbq package provides functionality to read/write from Google BigQuery.
pandas integrates with this external package. If pandas-gbq is installed, you can
use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the
respective functions from pandas-gbq.
Full documentation can be found here.
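A minimal sketch, assuming pandas-gbq is installed; my-project and my_dataset.my_table are placeholders for your own BigQuery project and table:
# read the result of a BigQuery query into a DataFrame
df_gbq = pd.read_gbq("SELECT * FROM my_dataset.my_table", project_id="my-project")

# write a DataFrame to a BigQuery table
df_gbq.to_gbq("my_dataset.my_table", project_id="my-project", if_exists="append")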
Stata format#
Writing to stata format#
The method to_stata() will write a DataFrame
into a .dta file. The format version of this file is always 115 (Stata 12).
In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [649]: df.to_stata("stata.dta")
Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating point data
types are stored as the basic missing data type (. in Stata).
Note
It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.
Warning
Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.
Warning
StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.
Reading from Stata format#
The top-level function read_stata will read a dta file and return
either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [650]: pd.read_stata("stata.dta")
Out[650]:
index A B
0 0 -1.690072 0.405144
1 1 -1.511309 -1.531396
2 2 0.572698 -1.106845
3 3 -1.185859 0.174564
4 4 0.603797 -1.796129
5 5 -0.791679 1.173795
6 6 -0.277710 1.859988
7 7 -0.258413 1.251808
8 8 1.443262 0.441553
9 9 1.168163 -2.054946
Specifying a chunksize yields a
StataReader instance that can be used to
read chunksize lines from the file at a time. The StataReader
object can be used as an iterator.
In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader:
.....: for df in reader:
.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)
For more fine-grained control, use iterator=True and specify
chunksize with each call to
read().
In [652]: with pd.read_stata("stata.dta", iterator=True) as reader:
.....: chunk1 = reader.read(5)
.....: chunk2 = reader.read(5)
.....:
Currently the index is retrieved as a column.
The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.
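A minimal sketch of that call pattern, reusing the stata.dta file written above (which has no value labels, so the returned mapping is empty here):
with pd.read_stata("stata.dta", iterator=True) as reader:
    df = reader.read()               # read() must be called first
    labels = reader.value_labels()   # dict of variable names to value labels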
The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.
Note
read_stata() and
StataReader support .dta formats 113-115
(Stata 10-12), 117 (Stata 13), and 118 (Stata 14).
Note
Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.
Categorical data#
Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.
Warning
Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical
variables using the keyword argument convert_categoricals (True by default).
The keyword argument order_categoricals (True by default) determines
whether imported Categorical variables are ordered.
Note
When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.
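A short sketch of both import modes (assuming "stata_cat.dta" is a file containing labeled data; the file name is hypothetical):
cat_df = pd.read_stata("stata_cat.dta")                               # labeled values as Categorical
raw_df = pd.read_stata("stata_cat.dta", convert_categoricals=False)   # original integer codes/values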
Note
Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
SAS formats#
The top-level function read_sas() can read (but not write) SAS
XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas("sas_data.sas7bdat")
Obtain an iterator and read an XPORT file 100,000 lines at a time:
def do_something(chunk):
pass
with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
for chunk in rdr:
do_something(chunk)
The specification for the xport file format is available from the SAS
web site.
No official documentation is available for the SAS7BDAT format.
SPSS formats#
New in version 0.25.0.
The top-level function read_spss() can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
SPSS files contain column names. By default the
whole file is read, categorical columns are converted into pd.Categorical,
and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False
to avoid converting categorical columns into pd.Categorical.
Read an SPSS file:
df = pd.read_spss("spss_data.sav")
Extract a subset of columns contained in usecols from an SPSS file and
avoid converting categorical columns into pd.Categorical:
df = pd.read_spss(
"spss_data.sav",
usecols=["foo", "bar"],
convert_categoricals=False,
)
More information about the SAV and ZSAV file formats is available here.
Other file formats#
pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.
netCDF#
xarray provides data structures inspired by the pandas DataFrame for working
with multi-dimensional datasets, with a focus on the netCDF file format and
easy conversion to and from pandas.
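As a quick sketch (assuming the xarray package is installed), a DataFrame can be round-tripped through an xarray Dataset:
ds = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]}).to_xarray()   # xarray.Dataset
back = ds.to_dataframe()                                        # back to a pandas DataFrame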
Performance considerations#
This is an informal comparison of various IO methods, using pandas
0.24.2. Timings are machine dependent and small differences should be
ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
The following test functions will be used below to compare the performance of several IO methods:
import numpy as np
import os
sz = 1000000
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
def test_sql_write(df):
if os.path.exists("test.sql"):
os.remove("test.sql")
sql_db = sqlite3.connect("test.sql")
df.to_sql(name="test_table", con=sql_db)
sql_db.close()
def test_sql_read():
sql_db = sqlite3.connect("test.sql")
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()
def test_hdf_fixed_write(df):
df.to_hdf("test_fixed.hdf", "test", mode="w")
def test_hdf_fixed_read():
pd.read_hdf("test_fixed.hdf", "test")
def test_hdf_fixed_write_compress(df):
df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")
def test_hdf_fixed_read_compress():
pd.read_hdf("test_fixed_compress.hdf", "test")
def test_hdf_table_write(df):
df.to_hdf("test_table.hdf", "test", mode="w", format="table")
def test_hdf_table_read():
pd.read_hdf("test_table.hdf", "test")
def test_hdf_table_write_compress(df):
df.to_hdf(
"test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
)
def test_hdf_table_read_compress():
pd.read_hdf("test_table_compress.hdf", "test")
def test_csv_write(df):
df.to_csv("test.csv", mode="w")
def test_csv_read():
pd.read_csv("test.csv", index_col=0)
def test_feather_write(df):
df.to_feather("test.feather")
def test_feather_read():
pd.read_feather("test.feather")
def test_pickle_write(df):
df.to_pickle("test.pkl")
def test_pickle_read():
pd.read_pickle("test.pkl")
def test_pickle_write_compress(df):
df.to_pickle("test.pkl.compress", compression="xz")
def test_pickle_read_compress():
pd.read_pickle("test.pkl.compress", compression="xz")
def test_parquet_write(df):
df.to_parquet("test.parquet")
def test_parquet_read():
pd.read_parquet("test.parquet")
When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.
In [4]: %timeit test_sql_write(df)
3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit test_hdf_fixed_write(df)
19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit test_hdf_fixed_write_compress(df)
19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit test_hdf_table_write(df)
449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit test_hdf_table_write_compress(df)
448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: %timeit test_csv_write(df)
3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit test_feather_write(df)
9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit test_pickle_write(df)
30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [12]: %timeit test_pickle_write_compress(df)
4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit test_parquet_write(df)
67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and
test_hdf_fixed_read.
In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit test_hdf_fixed_read()
19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit test_hdf_fixed_read_compress()
19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [17]: %timeit test_hdf_table_read()
38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [18]: %timeit test_hdf_table_read_compress()
38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [19]: %timeit test_csv_read()
452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [20]: %timeit test_feather_read()
12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit test_pickle_read()
18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit test_pickle_read_compress()
915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [23]: %timeit test_parquet_read()
24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).
29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
| 136
| 1,123
|
Converting a collection of txt files into ONE CSV file by using Python
I have a folder with multiple txt files. Each file contains information about a client of my friend's business that he entered manually from a hardcopy document. This information can include e-mails, addresses, requestID, etc. Each time he gets a new client he creates a new txt file in that folder.
Using Python, I want to create a CSV file that contains all information about all clients from the txt files so that I can open it in Excel. The files' content looks like this:
Date:24/02/2021
Email:*****@gmail.com
Product:Hard Drives
Type:Sandisk
Size:128GB
Some files have additional information, and each file is labeled by an ID (which is the name of the txt file).
What I'm thinking of is to make the code create a dictionary for each file, with each dict named after its txt file. The data types (date, email, product, etc.) will be the keys (keep in mind that not all files have the same number of keys, since some files have more or less data than others), and the values come from each file. This collection of dicts would then be converted into one CSV file that, when opened in Excel, should look like this:
FileID    Date        Email   Address   Product      Type      Color   Size
01-2021   02-01-2021                    Hard Drive   SanDisk           128GB
Is this a good way to achieve this goal, or is there a shorter and more effective one?
This code by @dukkee seems to logically fulfill the task required:
import os
import pandas as pd
FOLDER_PATH = "folder_path"
raw_data = []
for filename in os.listdir(FOLDER_PATH):
with open(os.path.join(FOLDER_PATH, filename)) as fp:
file_data = dict(line.split(":", 1) for line in fp if line)
file_data["FileID"] = filename
raw_data.append(file_data)
frame = pd.DataFrame(raw_data)
frame.to_csv("output.csv", index=False)
However, it keeps showing me this error:
The following code by @dm2 should also work, but it also shows me an error which I couldn't figure out:
import pandas as pd
import os
files = os.listdir('test/')
df_list = [pd.read_csv(f'test/{file}', sep = ':', header = None).set_index(0).T for file in files]
df_out = pd.concat(df_list)
# to reindex by filename
df_out.index = [file.strip('.txt') for file in files]
I made sure that all txt files have no empty lines, but this didn't resolve the errors.
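A possible cause (just an assumption, since the actual error text isn't shown here) is lines without a ':' separator or stray whitespace; a more defensive parse would be something like this sketch:
import os
import pandas as pd
FOLDER_PATH = "folder_path"   # placeholder, as in the snippets above
raw_data = []
for filename in os.listdir(FOLDER_PATH):
    with open(os.path.join(FOLDER_PATH, filename)) as fp:
        file_data = {}
        for line in fp:
            line = line.strip()
            if ":" not in line:          # skip blank or malformed lines instead of crashing
                continue
            key, value = line.split(":", 1)
            file_data[key.strip()] = value.strip()
    file_data["FileID"] = filename
    raw_data.append(file_data)
pd.DataFrame(raw_data).to_csv("output.csv", index=False)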
|
65,656,902
|
A question about how to make calculus after making a group by pandas
|
<p>I'm working with a Data Frame with categorical values where my input DataFrame is below:</p>
<p>df</p>
<pre><code> Age Gender Smoke
18 Female Yes
24 Female No
18 Female Yes
34 Male Yes
34 Male No
</code></pre>
<p>I want to group my DataFrame by the columns "Age" and "Gender", where the "Occurrence" column calculates the frequency of each selection. Then I want to create two other columns: "Smoke Yes", which calculates the number of smoking people for each selection, and "Smoke No", which calculates the number of non-smoking people.</p>
<pre><code>Age Gender Occurence Smoke Yes Smoke No
18 Woman 2 0.50 0.50
24 Woman 1 0 1
34 Man 2 0.5 0.5
</code></pre>
<p>In order to do that, I used the following code</p>
<pre><code>#Group and sort
df1=df.groupby(['Age', 'Gender']).size().reset_index(name='Frequency').sort_values('Frequency', ascending=False)
#Delete index
df1.reset_index(drop=True,inplace=True)
</code></pre>
<p>However, the df['Smoke'] column has disappeared, so I can't continue my calculations. Does anyone have an idea of what I can do to obtain something like the output DataFrame?</p>
| 65,657,000
| 2021-01-10T18:25:16.317000
| 1
| null | 1
| 77
|
python|pandas
|
<p>You can use <code>groupby</code> and <code>value_counts</code> with <code>normalize=True</code> to return the percentage share, then <code>unstack</code>. Also, using a dictionary you can replace the <code>Gender</code> column to match the desired output.</p>
<pre><code>d = {"Female":"Woman","Male":"Man"}
u = (df.groupby(['Age','Gender'])['Smoke'].value_counts(normalize=True)
.unstack().fillna(0))
s = df.groupby("Age")['Gender'].value_counts()
u.columns = u.columns.name+"_"+u.columns
out=u.rename_axis(None,axis=1).assign(Occurance=s).reset_index().replace({"Gender":d})
</code></pre>
<hr />
<pre><code>print(out)
Age Gender Smoke_No Smoke_Yes Occurance
0 18 Woman 0.0 1.0 2
1 24 Woman 1.0 0.0 1
2 34 Man 0.5 0.5 2
</code></pre>
| 2021-01-10T18:34:17.010000
| 1
|
https://pandas.pydata.org/docs/user_guide/dsintro.html
|
Intro to data structures#
Intro to data structures#
We’ll start with a quick, non-comprehensive overview of the fundamental data
structures in pandas to get you started. The fundamental behavior about data
types, indexing, axis labeling, and alignment apply across all of the
objects. To get started, import NumPy and load pandas into your namespace:
In [1]: import numpy as np
In [2]: import pandas as pd
Fundamentally, data alignment is intrinsic. The link
between labels and data will not be broken unless done so explicitly by you.
you can use groupby and value_counts with normalize=True to return percentage share. then unstack. Also using a dictionary you can replace the Gender column to match the desired output.
d = {"Female":"Woman","Male":"Man"}
u = (df.groupby(['Age','Gender'])['Smoke'].value_counts(normalize=True)
.unstack().fillna(0))
s = df.groupby("Age")['Gender'].value_counts()
u.columns = u.columns.name+"_"+u.columns
out=u.rename_axis(None,axis=1).assign(Occurance=s).reset_index().replace({"Gender":d})
print(out)
Age Gender Smoke_No Smoke_Yes Occurance
0 18 Woman 0.0 1.0 2
1 24 Woman 1.0 0.0 1
2 34 Man 0.5 0.5 2
We’ll give a brief intro to the data structures, then consider all of the broad
categories of functionality and methods in separate sections.
Series#
Series is a one-dimensional labeled array capable of holding any data
type (integers, strings, floating point numbers, Python objects, etc.). The axis
labels are collectively referred to as the index. The basic method to create a Series is to call:
>>> s = pd.Series(data, index=index)
Here, data can be many different things:
a Python dict
an ndarray
a scalar value (like 5)
The passed index is a list of axis labels. Thus, this separates into a few
cases depending on what data is:
From ndarray
If data is an ndarray, index must be the same length as data. If no
index is passed, one will be created having values [0, ..., len(data) - 1].
In [3]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [4]: s
Out[4]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 1.212112
dtype: float64
In [5]: s.index
Out[5]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.173215
1 0.119209
2 -1.044236
3 -0.861849
4 -2.104569
dtype: float64
Note
pandas supports non-unique index values. If an operation
that does not support duplicate index values is attempted, an exception
will be raised at that time.
From dict
Series can be instantiated from dicts:
In [7]: d = {"b": 1, "a": 0, "c": 2}
In [8]: pd.Series(d)
Out[8]:
b 1
a 0
c 2
dtype: int64
If an index is passed, the values in data corresponding to the labels in the
index will be pulled out.
In [9]: d = {"a": 0.0, "b": 1.0, "c": 2.0}
In [10]: pd.Series(d)
Out[10]:
a 0.0
b 1.0
c 2.0
dtype: float64
In [11]: pd.Series(d, index=["b", "c", "d", "a"])
Out[11]:
b 1.0
c 2.0
d NaN
a 0.0
dtype: float64
Note
NaN (not a number) is the standard missing data marker used in pandas.
From scalar value
If data is a scalar value, an index must be
provided. The value will be repeated to match the length of index.
In [12]: pd.Series(5.0, index=["a", "b", "c", "d", "e"])
Out[12]:
a 5.0
b 5.0
c 5.0
d 5.0
e 5.0
dtype: float64
Series is ndarray-like#
Series acts very similarly to a ndarray and is a valid argument to most NumPy functions.
However, operations such as slicing will also slice the index.
In [13]: s[0]
Out[13]: 0.4691122999071863
In [14]: s[:3]
Out[14]:
a 0.469112
b -0.282863
c -1.509059
dtype: float64
In [15]: s[s > s.median()]
Out[15]:
a 0.469112
e 1.212112
dtype: float64
In [16]: s[[4, 3, 1]]
Out[16]:
e 1.212112
d -1.135632
b -0.282863
dtype: float64
In [17]: np.exp(s)
Out[17]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 3.360575
dtype: float64
Note
We will address array-based indexing like s[[4, 3, 1]]
in section on indexing.
Like a NumPy array, a pandas Series has a single dtype.
In [18]: s.dtype
Out[18]: dtype('float64')
This is often a NumPy dtype. However, pandas and 3rd-party libraries
extend NumPy’s type system in a few places, in which case the dtype would
be an ExtensionDtype. Some examples within
pandas are Categorical data and Nullable integer data type. See dtypes
for more.
If you need the actual array backing a Series, use Series.array.
In [19]: s.array
Out[19]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64
Accessing the array can be useful when you need to do some operation without the
index (to disable automatic alignment, for example).
Series.array will always be an ExtensionArray.
Briefly, an ExtensionArray is a thin wrapper around one or more concrete arrays like a
numpy.ndarray. pandas knows how to take an ExtensionArray and
store it in a Series or a column of a DataFrame.
See dtypes for more.
While Series is ndarray-like, if you need an actual ndarray, then use
Series.to_numpy().
In [20]: s.to_numpy()
Out[20]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
Even if the Series is backed by a ExtensionArray,
Series.to_numpy() will return a NumPy ndarray.
Series is dict-like#
A Series is also like a fixed-size dict in that you can get and set values by index
label:
In [21]: s["a"]
Out[21]: 0.4691122999071863
In [22]: s["e"] = 12.0
In [23]: s
Out[23]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 12.000000
dtype: float64
In [24]: "e" in s
Out[24]: True
In [25]: "f" in s
Out[25]: False
If a label is not contained in the index, an exception is raised:
In [26]: s["f"]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/base.py:3802, in Index.get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
File ~/work/pandas/pandas/pandas/_libs/index.pyx:138, in pandas._libs.index.IndexEngine.get_loc()
File ~/work/pandas/pandas/pandas/_libs/index.pyx:165, in pandas._libs.index.IndexEngine.get_loc()
File ~/work/pandas/pandas/pandas/_libs/hashtable_class_helper.pxi:5745, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File ~/work/pandas/pandas/pandas/_libs/hashtable_class_helper.pxi:5753, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'f'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[26], line 1
----> 1 s["f"]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/base.py:3804, in Index.get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)
KeyError: 'f'
Using the Series.get() method, a missing label will return None or specified default:
In [27]: s.get("f")
In [28]: s.get("f", np.nan)
Out[28]: nan
These labels can also be accessed by attribute.
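For example (a small sketch; attribute access works only when the label is a valid Python identifier):
s.b   # equivalent to s["b"]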
Vectorized operations and label alignment with Series#
When working with raw NumPy arrays, looping through value-by-value is usually
not necessary. The same is true when working with Series in pandas.
Series can also be passed into most NumPy methods expecting an ndarray.
In [29]: s + s
Out[29]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [30]: s * 2
Out[30]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [31]: np.exp(s)
Out[31]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 162754.791419
dtype: float64
A key difference between Series and ndarray is that operations between Series
automatically align the data based on label. Thus, you can write computations
without giving consideration to whether the Series involved have the same
labels.
In [32]: s[1:] + s[:-1]
Out[32]:
a NaN
b -0.565727
c -3.018117
d -2.271265
e NaN
dtype: float64
The result of an operation between unaligned Series will have the union of
the indexes involved. If a label is not found in one Series or the other, the
result will be marked as missing NaN. Being able to write code without doing
any explicit data alignment grants immense freedom and flexibility in
interactive data analysis and research. The integrated data alignment features
of the pandas data structures set pandas apart from the majority of related
tools for working with labeled data.
Note
In general, we chose to make the default result of operations between
differently indexed objects yield the union of the indexes in order to
avoid loss of information. Having an index label, though the data is
missing, is typically important information as part of a computation. You
of course have the option of dropping labels with missing data via the
dropna function.
Name attribute#
Series also has a name attribute:
In [33]: s = pd.Series(np.random.randn(5), name="something")
In [34]: s
Out[34]:
0 -0.494929
1 1.071804
2 0.721555
3 -0.706771
4 -1.039575
Name: something, dtype: float64
In [35]: s.name
Out[35]: 'something'
The Series name can be assigned automatically in many cases, in particular,
when selecting a single column from a DataFrame, the name will be assigned
the column label.
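A small sketch of that behaviour (the frame and column names below are made up):
frame = pd.DataFrame({"col1": [1, 2, 3]})
frame["col1"].name   # 'col1'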
You can rename a Series with the pandas.Series.rename() method.
In [36]: s2 = s.rename("different")
In [37]: s2.name
Out[37]: 'different'
Note that s and s2 refer to different objects.
DataFrame#
DataFrame is a 2-dimensional labeled data structure with columns of
potentially different types. You can think of it like a spreadsheet or SQL
table, or a dict of Series objects. It is generally the most commonly used
pandas object. Like Series, DataFrame accepts many different kinds of input:
Dict of 1D ndarrays, lists, dicts, or Series
2-D numpy.ndarray
Structured or record ndarray
A Series
Another DataFrame
Along with the data, you can optionally pass index (row labels) and
columns (column labels) arguments. If you pass an index and / or columns,
you are guaranteeing the index and / or columns of the resulting
DataFrame. Thus, a dict of Series plus a specific index will discard all data
not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data
based on common sense rules.
From dict of Series or dicts#
The resulting index will be the union of the indexes of the various
Series. If there are any nested dicts, these will first be converted to
Series. If no columns are passed, the columns will be the ordered list of dict
keys.
In [38]: d = {
....: "one": pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"]),
....: "two": pd.Series([1.0, 2.0, 3.0, 4.0], index=["a", "b", "c", "d"]),
....: }
....:
In [39]: df = pd.DataFrame(d)
In [40]: df
Out[40]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
In [41]: pd.DataFrame(d, index=["d", "b", "a"])
Out[41]:
one two
d NaN 4.0
b 2.0 2.0
a 1.0 1.0
In [42]: pd.DataFrame(d, index=["d", "b", "a"], columns=["two", "three"])
Out[42]:
two three
d 4.0 NaN
b 2.0 NaN
a 1.0 NaN
The row and column labels can be accessed respectively by accessing the
index and columns attributes:
Note
When a particular set of columns is passed along with a dict of data, the
passed columns override the keys in the dict.
In [43]: df.index
Out[43]: Index(['a', 'b', 'c', 'd'], dtype='object')
In [44]: df.columns
Out[44]: Index(['one', 'two'], dtype='object')
From dict of ndarrays / lists#
The ndarrays must all be the same length. If an index is passed, it must
also be the same length as the arrays. If no index is passed, the
result will be range(n), where n is the array length.
In [45]: d = {"one": [1.0, 2.0, 3.0, 4.0], "two": [4.0, 3.0, 2.0, 1.0]}
In [46]: pd.DataFrame(d)
Out[46]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0
In [47]: pd.DataFrame(d, index=["a", "b", "c", "d"])
Out[47]:
one two
a 1.0 4.0
b 2.0 3.0
c 3.0 2.0
d 4.0 1.0
From structured or record array#
This case is handled identically to a dict of arrays.
In [48]: data = np.zeros((2,), dtype=[("A", "i4"), ("B", "f4"), ("C", "a10")])
In [49]: data[:] = [(1, 2.0, "Hello"), (2, 3.0, "World")]
In [50]: pd.DataFrame(data)
Out[50]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'
In [51]: pd.DataFrame(data, index=["first", "second"])
Out[51]:
A B C
first 1 2.0 b'Hello'
second 2 3.0 b'World'
In [52]: pd.DataFrame(data, columns=["C", "A", "B"])
Out[52]:
C A B
0 b'Hello' 1 2.0
1 b'World' 2 3.0
Note
DataFrame is not intended to work exactly like a 2-dimensional NumPy
ndarray.
From a list of dicts#
In [53]: data2 = [{"a": 1, "b": 2}, {"a": 5, "b": 10, "c": 20}]
In [54]: pd.DataFrame(data2)
Out[54]:
a b c
0 1 2 NaN
1 5 10 20.0
In [55]: pd.DataFrame(data2, index=["first", "second"])
Out[55]:
a b c
first 1 2 NaN
second 5 10 20.0
In [56]: pd.DataFrame(data2, columns=["a", "b"])
Out[56]:
a b
0 1 2
1 5 10
From a dict of tuples#
You can automatically create a MultiIndexed frame by passing a tuples
dictionary.
In [57]: pd.DataFrame(
....: {
....: ("a", "b"): {("A", "B"): 1, ("A", "C"): 2},
....: ("a", "a"): {("A", "C"): 3, ("A", "B"): 4},
....: ("a", "c"): {("A", "B"): 5, ("A", "C"): 6},
....: ("b", "a"): {("A", "C"): 7, ("A", "B"): 8},
....: ("b", "b"): {("A", "D"): 9, ("A", "B"): 10},
....: }
....: )
....:
Out[57]:
a b
b a c a b
A B 1.0 4.0 5.0 8.0 10.0
C 2.0 3.0 6.0 7.0 NaN
D NaN NaN NaN NaN 9.0
From a Series#
The result will be a DataFrame with the same index as the input Series, and
with one column whose name is the original name of the Series (only if no other
column name provided).
In [58]: ser = pd.Series(range(3), index=list("abc"), name="ser")
In [59]: pd.DataFrame(ser)
Out[59]:
ser
a 0
b 1
c 2
From a list of namedtuples#
The field names of the first namedtuple in the list determine the columns
of the DataFrame. The remaining namedtuples (or tuples) are simply unpacked
and their values are fed into the rows of the DataFrame. If any of those
tuples is shorter than the first namedtuple then the later columns in the
corresponding row are marked as missing values. If any are longer than the
first namedtuple, a ValueError is raised.
In [60]: from collections import namedtuple
In [61]: Point = namedtuple("Point", "x y")
In [62]: pd.DataFrame([Point(0, 0), Point(0, 3), (2, 3)])
Out[62]:
x y
0 0 0
1 0 3
2 2 3
In [63]: Point3D = namedtuple("Point3D", "x y z")
In [64]: pd.DataFrame([Point3D(0, 0, 0), Point3D(0, 3, 5), Point(2, 3)])
Out[64]:
x y z
0 0 0 0.0
1 0 3 5.0
2 2 3 NaN
From a list of dataclasses#
New in version 1.1.0.
Data Classes as introduced in PEP557,
can be passed into the DataFrame constructor.
Passing a list of dataclasses is equivalent to passing a list of dictionaries.
Please be aware that all values in the list should be dataclasses; mixing
types in the list would result in a TypeError.
In [65]: from dataclasses import make_dataclass
In [66]: Point = make_dataclass("Point", [("x", int), ("y", int)])
In [67]: pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)])
Out[67]:
x y
0 0 0
1 0 3
2 2 3
Missing data
To construct a DataFrame with missing data, we use np.nan to
represent missing values. Alternatively, you may pass a numpy.MaskedArray
as the data argument to the DataFrame constructor, and its masked entries will
be considered missing. See Missing data for more.
Alternate constructors#
DataFrame.from_dict
DataFrame.from_dict() takes a dict of dicts or a dict of array-like sequences
and returns a DataFrame. It operates like the DataFrame constructor except
for the orient parameter which is 'columns' by default, but which can be
set to 'index' in order to use the dict keys as row labels.
In [68]: pd.DataFrame.from_dict(dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]))
Out[68]:
A B
0 1 4
1 2 5
2 3 6
If you pass orient='index', the keys will be the row labels. In this
case, you can also pass the desired column names:
In [69]: pd.DataFrame.from_dict(
....: dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]),
....: orient="index",
....: columns=["one", "two", "three"],
....: )
....:
Out[69]:
one two three
A 1 2 3
B 4 5 6
DataFrame.from_records
DataFrame.from_records() takes a list of tuples or an ndarray with structured
dtype. It works analogously to the normal DataFrame constructor, except that
the resulting DataFrame index may be a specific field of the structured
dtype.
In [70]: data
Out[70]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])
In [71]: pd.DataFrame.from_records(data, index="C")
Out[71]:
A B
C
b'Hello' 1 2.0
b'World' 2 3.0
Column selection, addition, deletion#
You can treat a DataFrame semantically like a dict of like-indexed Series
objects. Getting, setting, and deleting columns works with the same syntax as
the analogous dict operations:
In [72]: df["one"]
Out[72]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
In [73]: df["three"] = df["one"] * df["two"]
In [74]: df["flag"] = df["one"] > 2
In [75]: df
Out[75]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False
Columns can be deleted or popped like with a dict:
In [76]: del df["two"]
In [77]: three = df.pop("three")
In [78]: df
Out[78]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False
When inserting a scalar value, it will naturally be propagated to fill the
column:
In [79]: df["foo"] = "bar"
In [80]: df
Out[80]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar
When inserting a Series that does not have the same index as the DataFrame, it
will be conformed to the DataFrame’s index:
In [81]: df["one_trunc"] = df["one"][:2]
In [82]: df
Out[82]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN
You can insert raw ndarrays but their length must match the length of the
DataFrame’s index.
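A brief sketch on a copy, so the frame used in the surrounding examples is left untouched:
tmp = df.copy()
tmp["raw"] = np.arange(len(tmp.index))   # OK: length matches the index
# tmp["bad"] = np.arange(3)              # would raise ValueError: length mismatch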
By default, columns get inserted at the end. DataFrame.insert()
inserts at a particular location in the columns:
In [83]: df.insert(1, "bar", df["one"])
In [84]: df
Out[84]:
one bar flag foo one_trunc
a 1.0 1.0 False bar 1.0
b 2.0 2.0 False bar 2.0
c 3.0 3.0 True bar NaN
d NaN NaN False bar NaN
Assigning new columns in method chains#
Inspired by dplyr’s
mutate verb, DataFrame has an assign()
method that allows you to easily create new columns that are potentially
derived from existing columns.
In [85]: iris = pd.read_csv("data/iris.data")
In [86]: iris.head()
Out[86]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
In [87]: iris.assign(sepal_ratio=iris["SepalWidth"] / iris["SepalLength"]).head()
Out[87]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
In the example above, we inserted a precomputed value. We can also pass in
a function of one argument to be evaluated on the DataFrame being assigned to.
In [88]: iris.assign(sepal_ratio=lambda x: (x["SepalWidth"] / x["SepalLength"])).head()
Out[88]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
assign() always returns a copy of the data, leaving the original
DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is
useful when you don’t have a reference to the DataFrame at hand. This is
common when using assign() in a chain of operations. For example,
we can limit the DataFrame to just those observations with a Sepal Length
greater than 5, calculate the ratio, and plot:
In [89]: (
....: iris.query("SepalLength > 5")
....: .assign(
....: SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
....: PetalRatio=lambda x: x.PetalWidth / x.PetalLength,
....: )
....: .plot(kind="scatter", x="SepalRatio", y="PetalRatio")
....: )
....:
Out[89]: <AxesSubplot: xlabel='SepalRatio', ylabel='PetalRatio'>
Since a function is passed in, the function is computed on the DataFrame
being assigned to. Importantly, this is the DataFrame that’s been filtered
to those rows with sepal length greater than 5. The filtering happens first,
and then the ratio calculations. This is an example where we didn’t
have a reference to the filtered DataFrame available.
The function signature for assign() is simply **kwargs. The keys
are the column names for the new fields, and the values are either a value
to be inserted (for example, a Series or NumPy array), or a function
of one argument to be called on the DataFrame. A copy of the original
DataFrame is returned, with the new values inserted.
The order of **kwargs is preserved. This allows
for dependent assignment, where an expression later in **kwargs can refer
to a column created earlier in the same assign().
In [90]: dfa = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
In [91]: dfa.assign(C=lambda x: x["A"] + x["B"], D=lambda x: x["A"] + x["C"])
Out[91]:
A B C D
0 1 4 5 6
1 2 5 7 9
2 3 6 9 12
In the second expression, x['C'] will refer to the newly created column,
that’s equal to dfa['A'] + dfa['B'].
Indexing / selection#
The basics of indexing are as follows:
Operation
Syntax
Result
Select column
df[col]
Series
Select row by label
df.loc[label]
Series
Select row by integer location
df.iloc[loc]
Series
Slice rows
df[5:10]
DataFrame
Select rows by boolean vector
df[bool_vec]
DataFrame
Row selection, for example, returns a Series whose index is the columns of the
DataFrame:
In [92]: df.loc["b"]
Out[92]:
one 2.0
bar 2.0
flag False
foo bar
one_trunc 2.0
Name: b, dtype: object
In [93]: df.iloc[2]
Out[93]:
one 3.0
bar 3.0
flag True
foo bar
one_trunc NaN
Name: c, dtype: object
For a more exhaustive treatment of sophisticated label-based indexing and
slicing, see the section on indexing. We will address the
fundamentals of reindexing / conforming to new sets of labels in the
section on reindexing.
Data alignment and arithmetic#
Data alignment between DataFrame objects automatically align on both the
columns and the index (row labels). Again, the resulting object will have the
union of the column and row labels.
In [94]: df = pd.DataFrame(np.random.randn(10, 4), columns=["A", "B", "C", "D"])
In [95]: df2 = pd.DataFrame(np.random.randn(7, 3), columns=["A", "B", "C"])
In [96]: df + df2
Out[96]:
A B C D
0 0.045691 -0.014138 1.380871 NaN
1 -0.955398 -1.501007 0.037181 NaN
2 -0.662690 1.534833 -0.859691 NaN
3 -2.452949 1.237274 -0.133712 NaN
4 1.414490 1.951676 -2.320422 NaN
5 -0.494922 -1.649727 -1.084601 NaN
6 -1.047551 -0.748572 -0.805479 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
When doing an operation between DataFrame and Series, the default behavior is
to align the Series index on the DataFrame columns, thus broadcasting
row-wise. For example:
In [97]: df - df.iloc[0]
Out[97]:
A B C D
0 0.000000 0.000000 0.000000 0.000000
1 -1.359261 -0.248717 -0.453372 -1.754659
2 0.253128 0.829678 0.010026 -1.991234
3 -1.311128 0.054325 -1.724913 -1.620544
4 0.573025 1.500742 -0.676070 1.367331
5 -1.741248 0.781993 -1.241620 -2.053136
6 -1.240774 -0.869551 -0.153282 0.000430
7 -0.743894 0.411013 -0.929563 -0.282386
8 -1.194921 1.320690 0.238224 -1.482644
9 2.293786 1.856228 0.773289 -1.446531
For explicit control over the matching and broadcasting behavior, see the
section on flexible binary operations.
Arithmetic operations with scalars operate element-wise:
In [98]: df * 5 + 2
Out[98]:
A B C D
0 3.359299 -0.124862 4.835102 3.381160
1 -3.437003 -1.368449 2.568242 -5.392133
2 4.624938 4.023526 4.885230 -6.575010
3 -3.196342 0.146766 -3.789461 -4.721559
4 6.224426 7.378849 1.454750 10.217815
5 -5.346940 3.785103 -1.373001 -6.884519
6 -2.844569 -4.472618 4.068691 3.383309
7 -0.360173 1.930201 0.187285 1.969232
8 -2.615303 6.478587 6.026220 -4.032059
9 14.828230 9.156280 8.701544 -3.851494
In [99]: 1 / df
Out[99]:
A B C D
0 3.678365 -2.353094 1.763605 3.620145
1 -0.919624 -1.484363 8.799067 -0.676395
2 1.904807 2.470934 1.732964 -0.583090
3 -0.962215 -2.697986 -0.863638 -0.743875
4 1.183593 0.929567 -9.170108 0.608434
5 -0.680555 2.800959 -1.482360 -0.562777
6 -1.032084 -0.772485 2.416988 3.614523
7 -2.118489 -71.634509 -2.758294 -162.507295
8 -1.083352 1.116424 1.241860 -0.828904
9 0.389765 0.698687 0.746097 -0.854483
In [100]: df ** 4
Out[100]:
A B C D
0 0.005462 3.261689e-02 0.103370 5.822320e-03
1 1.398165 2.059869e-01 0.000167 4.777482e+00
2 0.075962 2.682596e-02 0.110877 8.650845e+00
3 1.166571 1.887302e-02 1.797515 3.265879e+00
4 0.509555 1.339298e+00 0.000141 7.297019e+00
5 4.661717 1.624699e-02 0.207103 9.969092e+00
6 0.881334 2.808277e+00 0.029302 5.858632e-03
7 0.049647 3.797614e-08 0.017276 1.433866e-09
8 0.725974 6.437005e-01 0.420446 2.118275e+00
9 43.329821 4.196326e+00 3.227153 1.875802e+00
Boolean operators operate element-wise as well:
In [101]: df1 = pd.DataFrame({"a": [1, 0, 1], "b": [0, 1, 1]}, dtype=bool)
In [102]: df2 = pd.DataFrame({"a": [0, 1, 1], "b": [1, 1, 0]}, dtype=bool)
In [103]: df1 & df2
Out[103]:
a b
0 False False
1 False True
2 True False
In [104]: df1 | df2
Out[104]:
a b
0 True True
1 True True
2 True True
In [105]: df1 ^ df2
Out[105]:
a b
0 True True
1 True False
2 False True
In [106]: -df1
Out[106]:
a b
0 False True
1 True False
2 False False
Transposing#
To transpose, access the T attribute or DataFrame.transpose(),
similar to an ndarray:
# only show the first 5 rows
In [107]: df[:5].T
Out[107]:
0 1 2 3 4
A 0.271860 -1.087401 0.524988 -1.039268 0.844885
B -0.424972 -0.673690 0.404705 -0.370647 1.075770
C 0.567020 0.113648 0.577046 -1.157892 -0.109050
D 0.276232 -1.478427 -1.715002 -1.344312 1.643563
DataFrame interoperability with NumPy functions#
Most NumPy functions can be called directly on Series and DataFrame.
In [108]: np.exp(df)
Out[108]:
A B C D
0 1.312403 0.653788 1.763006 1.318154
1 0.337092 0.509824 1.120358 0.227996
2 1.690438 1.498861 1.780770 0.179963
3 0.353713 0.690288 0.314148 0.260719
4 2.327710 2.932249 0.896686 5.173571
5 0.230066 1.429065 0.509360 0.169161
6 0.379495 0.274028 1.512461 1.318720
7 0.623732 0.986137 0.695904 0.993865
8 0.397301 2.449092 2.237242 0.299269
9 13.009059 4.183951 3.820223 0.310274
In [109]: np.asarray(df)
Out[109]:
array([[ 0.2719, -0.425 , 0.567 , 0.2762],
[-1.0874, -0.6737, 0.1136, -1.4784],
[ 0.525 , 0.4047, 0.577 , -1.715 ],
[-1.0393, -0.3706, -1.1579, -1.3443],
[ 0.8449, 1.0758, -0.109 , 1.6436],
[-1.4694, 0.357 , -0.6746, -1.7769],
[-0.9689, -1.2945, 0.4137, 0.2767],
[-0.472 , -0.014 , -0.3625, -0.0062],
[-0.9231, 0.8957, 0.8052, -1.2064],
[ 2.5656, 1.4313, 1.3403, -1.1703]])
DataFrame is not intended to be a drop-in replacement for ndarray as its
indexing semantics and data model are quite different in places from an n-dimensional
array.
Series implements __array_ufunc__, which allows it to work with NumPy’s
universal functions.
The ufunc is applied to the underlying array in a Series.
In [110]: ser = pd.Series([1, 2, 3, 4])
In [111]: np.exp(ser)
Out[111]:
0 2.718282
1 7.389056
2 20.085537
3 54.598150
dtype: float64
Changed in version 0.25.0: When multiple Series are passed to a ufunc, they are aligned before
performing the operation.
Like other parts of the library, pandas will automatically align labeled inputs
as part of a ufunc with multiple inputs. For example, using numpy.remainder()
on two Series with differently ordered labels will align before the operation.
In [112]: ser1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
In [113]: ser2 = pd.Series([1, 3, 5], index=["b", "a", "c"])
In [114]: ser1
Out[114]:
a 1
b 2
c 3
dtype: int64
In [115]: ser2
Out[115]:
b 1
a 3
c 5
dtype: int64
In [116]: np.remainder(ser1, ser2)
Out[116]:
a 1
b 0
c 3
dtype: int64
As usual, the union of the two indices is taken, and non-overlapping values are filled
with missing values.
In [117]: ser3 = pd.Series([2, 4, 6], index=["b", "c", "d"])
In [118]: ser3
Out[118]:
b 2
c 4
d 6
dtype: int64
In [119]: np.remainder(ser1, ser3)
Out[119]:
a NaN
b 0.0
c 3.0
d NaN
dtype: float64
When a binary ufunc is applied to a Series and Index, the Series
implementation takes precedence and a Series is returned.
In [120]: ser = pd.Series([1, 2, 3])
In [121]: idx = pd.Index([4, 5, 6])
In [122]: np.maximum(ser, idx)
Out[122]:
0 4
1 5
2 6
dtype: int64
NumPy ufuncs are safe to apply to Series backed by non-ndarray arrays,
for example arrays.SparseArray (see Sparse calculation). If possible,
the ufunc is applied without converting the underlying data to an ndarray.
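For instance, a small sketch with a sparse-backed Series:
sparse_ser = pd.Series(pd.arrays.SparseArray([1.0, 0.0, -2.0]))
np.abs(sparse_ser)   # result is still backed by a SparseArray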
Console display#
A very large DataFrame will be truncated when displayed in the console.
You can also get a summary using info().
(The baseball dataset is from the plyr R package):
In [123]: baseball = pd.read_csv("data/baseball.csv")
In [124]: print(baseball)
id player year stint team lg ... so ibb hbp sh sf gidp
0 88641 womacto01 2006 2 CHN NL ... 4.0 0.0 0.0 3.0 0.0 0.0
1 88643 schilcu01 2006 1 BOS AL ... 1.0 0.0 0.0 0.0 0.0 0.0
.. ... ... ... ... ... .. ... ... ... ... ... ... ...
98 89533 aloumo01 2007 1 NYN NL ... 30.0 5.0 2.0 0.0 3.0 13.0
99 89534 alomasa02 2007 1 NYN NL ... 3.0 0.0 0.0 0.0 0.0 0.0
[100 rows x 23 columns]
In [125]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100 entries, 0 to 99
Data columns (total 23 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 100 non-null int64
1 player 100 non-null object
2 year 100 non-null int64
3 stint 100 non-null int64
4 team 100 non-null object
5 lg 100 non-null object
6 g 100 non-null int64
7 ab 100 non-null int64
8 r 100 non-null int64
9 h 100 non-null int64
10 X2b 100 non-null int64
11 X3b 100 non-null int64
12 hr 100 non-null int64
13 rbi 100 non-null float64
14 sb 100 non-null float64
15 cs 100 non-null float64
16 bb 100 non-null int64
17 so 100 non-null float64
18 ibb 100 non-null float64
19 hbp 100 non-null float64
20 sh 100 non-null float64
21 sf 100 non-null float64
22 gidp 100 non-null float64
dtypes: float64(9), int64(11), object(3)
memory usage: 18.1+ KB
However, using DataFrame.to_string() will return a string representation of the
DataFrame in tabular form, though it won’t always fit the console width:
In [126]: print(baseball.iloc[-20:, :12].to_string())
id player year stint team lg g ab r h X2b X3b
80 89474 finlest01 2007 1 COL NL 43 94 9 17 3 0
81 89480 embreal01 2007 1 OAK AL 4 0 0 0 0 0
82 89481 edmonji01 2007 1 SLN NL 117 365 39 92 15 2
83 89482 easleda01 2007 1 NYN NL 76 193 24 54 6 0
84 89489 delgaca01 2007 1 NYN NL 139 538 71 139 30 0
85 89493 cormirh01 2007 1 CIN NL 6 0 0 0 0 0
86 89494 coninje01 2007 2 NYN NL 21 41 2 8 2 0
87 89495 coninje01 2007 1 CIN NL 80 215 23 57 11 1
88 89497 clemero02 2007 1 NYA AL 2 2 0 1 0 0
89 89498 claytro01 2007 2 BOS AL 8 6 1 0 0 0
90 89499 claytro01 2007 1 TOR AL 69 189 23 48 14 0
91 89501 cirilje01 2007 2 ARI NL 28 40 6 8 4 0
92 89502 cirilje01 2007 1 MIN AL 50 153 18 40 9 2
93 89521 bondsba01 2007 1 SFN NL 126 340 75 94 14 0
94 89523 biggicr01 2007 1 HOU NL 141 517 68 130 31 3
95 89525 benitar01 2007 2 FLO NL 34 0 0 0 0 0
96 89526 benitar01 2007 1 SFN NL 19 0 0 0 0 0
97 89530 ausmubr01 2007 1 HOU NL 117 349 38 82 16 3
98 89533 aloumo01 2007 1 NYN NL 87 328 51 112 19 1
99 89534 alomasa02 2007 1 NYN NL 8 22 1 3 1 0
Wide DataFrames will be printed across multiple rows by
default:
In [127]: pd.DataFrame(np.random.randn(3, 12))
Out[127]:
0 1 2 ... 9 10 11
0 -1.226825 0.769804 -1.281247 ... -1.110336 -0.619976 0.149748
1 -0.732339 0.687738 0.176444 ... 1.462696 -1.743161 -0.826591
2 -0.345352 1.314232 0.690579 ... 0.896171 -0.487602 -0.082240
[3 rows x 12 columns]
You can change how much to print on a single row by setting the display.width
option:
In [128]: pd.set_option("display.width", 40) # default is 80
In [129]: pd.DataFrame(np.random.randn(3, 12))
Out[129]:
0 1 2 ... 9 10 11
0 -2.182937 0.380396 0.084844 ... -0.023688 2.410179 1.450520
1 0.206053 -0.251905 -2.213588 ... -0.025747 -0.988387 0.094055
2 1.262731 1.289997 0.082423 ... -0.281461 0.030711 0.109121
[3 rows x 12 columns]
You can adjust the max width of the individual columns by setting display.max_colwidth
In [130]: datafile = {
.....: "filename": ["filename_01", "filename_02"],
.....: "path": [
.....: "media/user_name/storage/folder_01/filename_01",
.....: "media/user_name/storage/folder_02/filename_02",
.....: ],
.....: }
.....:
In [131]: pd.set_option("display.max_colwidth", 30)
In [132]: pd.DataFrame(datafile)
Out[132]:
filename path
0 filename_01 media/user_name/storage/fo...
1 filename_02 media/user_name/storage/fo...
In [133]: pd.set_option("display.max_colwidth", 100)
In [134]: pd.DataFrame(datafile)
Out[134]:
filename path
0 filename_01 media/user_name/storage/folder_01/filename_01
1 filename_02 media/user_name/storage/folder_02/filename_02
You can also disable this feature via the expand_frame_repr option.
This will print the table in one block.
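For example (this changes a global display option, so you may want to reset it afterwards):
pd.set_option("expand_frame_repr", False)   # print wide frames in a single block
pd.reset_option("expand_frame_repr")        # restore the default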
DataFrame column attribute access and IPython completion#
If a DataFrame column label is a valid Python variable name, the column can be
accessed like an attribute:
In [135]: df = pd.DataFrame({"foo1": np.random.randn(5), "foo2": np.random.randn(5)})
In [136]: df
Out[136]:
foo1 foo2
0 1.126203 0.781836
1 -0.977349 -1.071357
2 1.474071 0.441153
3 -0.064034 2.353925
4 -1.282782 0.583787
In [137]: df.foo1
Out[137]:
0 1.126203
1 -0.977349
2 1.474071
3 -0.064034
4 -1.282782
Name: foo1, dtype: float64
The columns are also connected to the IPython
completion mechanism so they can be tab-completed:
In [5]: df.foo<TAB> # noqa: E225, E999
df.foo1 df.foo2
| 541
| 1,240
|
A question about how to make calculus after making a group by pandas
I'm working with a Data Frame with categorical values where my input DataFrame is below:
df
Age Gender Smoke
18 Female Yes
24 Female No
18 Female Yes
34 Male Yes
34 Male No
I want to group my DataFrame by the columns "Age" and "Gender", where the "Occurrence" column calculates the frequency of each selection. Then I want to create two other columns: "Smoke Yes", which calculates the number of smoking people for each selection, and "Smoke No", which calculates the number of non-smoking people.
Age Gender Occurence Smoke Yes Smoke No
18 Woman 2 0.50 0.50
24 Woman 1 0 1
34 Man 2 0.5 0.5
In order to do that, I used the following code
#Group and sort
df1=df.groupby(['Age', 'Gender']).size().reset_index(name='Frequency').sort_values('Frequency', ascending=False)
#Delete index
df1.reset_index(drop=True,inplace=True)
However, the df['Smoke'] column has disappeared, so I can't continue my calculations. Does anyone have an idea of what I can do to obtain something like the output DataFrame?
|
63,483,800
|
Compare two data-frames and removes rows based on multiple conditions
|
<p>I have two data-frames in the following format. df-a:</p>
<pre><code> ID Start_Date End_Date
1 cd2 2020-06-01 2020-06-09
2 cd2 2020-06-24 2020-07-21
3 cd56 2020-06-10 2020-07-03
4 cd915 2020-04-28 2020-07-21
5 cd103 2020-04-13 2020-04-24
</code></pre>
<p>and df-b:</p>
<pre><code> ID Date
1 cd2 2020-05-12
2 cd2 2020-04-12
3 cd2 2020-06-29
4 cd15 2020-04-28
5 cd193 2020-04-13
</code></pre>
<p>I need to discard all rows for all IDs in df-b where they fall within any of the date ranges for the same ID in df-a, i.e. the ANSWER:</p>
<pre><code> ID Date
1 cd2 2020-05-12
2 cd2 2020-04-12
4 cd15 2020-04-28
5 cd193 2020-04-13
</code></pre>
<p>as ID cd2 is the only ID that matches in df-a with one date that falls within cd2's date ranges from df-a.</p>
<p>Sorry for the long-winded question. First time posting.</p>
| 63,494,548
| 2020-08-19T09:18:22.607000
| 1
| null | 0
| 77
|
python|pandas
|
<br>
I tried my best to understand your question; however, I am confused by your sample answer. <br>
None of the IDs in df-b should be removed. Even for row 3 of df-b, the date (2020-06-10) does not fall in the range of any start/end dates for ID cd2 in df-a.
<br> <br>
I did set up a similar example to what you provided with df-a being:
<pre class="lang-py prettyprint-override"><code> ID Start_Date End_Date
0 cd2 2020-06-01 2020-06-11
1 cd2 2020-06-24 2020-07-21
2 cd56 2020-06-10 2020-07-03
3 cd915 2020-04-28 2020-07-21
4 cd103 2020-04-13 2020-04-24
</code></pre>
<p>and df-b being:</p>
<pre class="lang-py prettyprint-override"><code> ID Date
0 cd2 2020-05-12
1 cd2 2020-04-12
2 cd2 2020-06-10
3 cd15 2020-04-28
4 cd193 2020-04-13
</code></pre>
<p>With this example, row 2 (0-based) of df-b should be removed since 2020-06-10 falls between 2020-06-01 and 2020-06-11 in row 0 of df-a.
<br><br>
<strong>Here's my code for doing the row deletions</strong></p>
<pre class="lang-py prettyprint-override"><code>df_c = df_b.copy()
for i in range(df_c.shape[0]):
currentID = df_c.ID[i]
currentDate = df_c.Date[i]
df_a_entriesForCurrentID = df_a.loc[df_a.ID == currentID]
for j in range(df_a_entriesForCurrentID.shape[0]):
startDate = df_a_entriesForCurrentID.iloc[j,:].Start_Date
endDate = df_a_entriesForCurrentID.iloc[j,:].End_Date
if (startDate <= currentDate <= endDate):
df_c = df_c.drop(i)
print('dropped')
</code></pre>
<p>where df_c is the output DataFrame. <br> <br></p>
<p>After running this, df_c should look like:</p>
<pre class="lang-py prettyprint-override"><code> ID Date
0 cd2 2020-05-12
1 cd2 2020-04-12
3 cd15 2020-04-28
4 cd193 2020-04-13
</code></pre>
| 2020-08-19T20:28:19.703000
| 1
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
I tried my best to understand your question; however, I am confused by your sample answer.
None of the IDs in df-b should be removed. Even for row 3 of df-b, the date (2020-06-10) does not fall in the range of any start/end dates for ID cd2 in df-a.
I did set up a similar example to what you provided with df-a being:
ID Start_Date End_Date
0 cd2 2020-06-01 2020-06-11
1 cd2 2020-06-24 2020-07-21
2 cd56 2020-06-10 2020-07-03
3 cd915 2020-04-28 2020-07-21
4 cd103 2020-04-13 2020-04-24
and df-b being:
ID Date
0 cd2 2020-05-12
1 cd2 2020-04-12
2 cd2 2020-06-10
3 cd15 2020-04-28
4 cd193 2020-04-13
With this example, row 2 (0-based) of df-b should be removed since 2020-06-10 falls between 2020-06-01 and 2020-06-11 in row 0 of df-a.
Here's my code for doing the row deletions
df_c = df_b.copy()
for i in range(df_c.shape[0]):
currentID = df_c.ID[i]
currentDate = df_c.Date[i]
df_a_entriesForCurrentID = df_a.loc[df_a.ID == currentID]
for j in range(df_a_entriesForCurrentID.shape[0]):
startDate = df_a_entriesForCurrentID.iloc[j,:].Start_Date
endDate = df_a_entriesForCurrentID.iloc[j,:].End_Date
if (startDate <= currentDate <= endDate):
df_c = df_c.drop(i)
print('dropped')
where df_c is the output DataFrame.
After running this, df_c should look like:
ID Date
0 cd2 2020-05-12
1 cd2 2020-04-12
3 cd15 2020-04-28
4 cd193 2020-04-13
| 0
| 1,542
|
Compare two data-frames and removes rows based on multiple conditions
I have two data-frames in the following format. df-a:
ID Start_Date End_Date
1 cd2 2020-06-01 2020-06-09
2 cd2 2020-06-24 2020-07-21
3 cd56 2020-06-10 2020-07-03
4 cd915 2020-04-28 2020-07-21
5 cd103 2020-04-13 2020-04-24
and df-b:
ID Date
1 cd2 2020-05-12
2 cd2 2020-04-12
3 cd2 2020-06-29
4 cd15 2020-04-28
5 cd193 2020-04-13
I need to discard all rows for all IDs in df-b where they fall within any of the date ranges for the same ID in df-a, i.e. the ANSWER:
ID Date
1 cd2 2020-05-12
2 cd2 2020-04-12
4 cd15 2020-04-28
5 cd193 2020-04-13
as ID cd2 is the only ID that matches in df-a with one date that falls within cd2's date ranges from df-a.
Sorry for the long-winded question. First time posting.
|
68,254,993
|
How to fill in multiple boolean criteria in Pandas
|
<p><code>I found an alternate answer here</code> <a href="https://stackoverflow.com/questions/49586471/add-new-column-to-python-pandas-dataframe-based-on-multiple-conditions">link</a></p>
<p>My dataframe currently looks like this</p>
<pre><code> date new_cases new_deaths new_tests year month day weekday
0 2019-12-31 0.0 0.0 NaN 2019 12 31 1
1 2020-01-01 0.0 0.0 NaN 2020 1 1 2
2 2020-01-02 0.0 0.0 NaN 2020 1 2 3
</code></pre>
<p>I want to pass code which will average out the 'new_cases' for weekdays and weekends.
My code currently looks like this, but I can only pass one condition, which is ' == 6 '. I want to pass multiple criteria, for example == (4,5,6)</p>
<pre><code>covid_df[covid_df.weekday == 6].new_cases.mean()
</code></pre>
<p>Any clue?</p>
| 68,255,095
| 2021-07-05T11:00:14.583000
| 2
| 0
| 1
| 78
|
python|pandas
|
<p>I think you're looking for <code>isin</code>:</p>
<pre><code># you can use `loc` to access `new_cases` also
days = [4, 5, 6]
df.loc[df.weekday.isin(days), "new_cases"].mean()
</code></pre>
<p>which selects the rows whose <code>weekday</code> is in the <code>days</code> list; and then selects <code>new_cases</code> column.</p>
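<p>If the end goal is to compare weekdays against the weekend, one possible sketch (assuming Monday=0 so that 5 and 6 are the weekend codes; adjust to your data) is:</p>
<pre><code>weekend = [5, 6]  # assumed weekend codes
is_weekend = covid_df.weekday.isin(weekend)

weekend_avg = covid_df.loc[is_weekend, "new_cases"].mean()
weekday_avg = covid_df.loc[~is_weekend, "new_cases"].mean()
</code></pre>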
| 2021-07-05T11:08:14.067000
| 1
|
https://pandas.pydata.org/docs/user_guide/missing_data.html
|
Working with missing data#
Working with missing data#
In this section, we will discuss missing (also referred to as NA) values in
pandas.
Note
The choice of using NaN internally to denote missing data was largely
for simplicity and performance reasons.
Starting from pandas 1.0, some optional data types start experimenting
with a native NA scalar using a mask-based approach. See
here for more.
See the cookbook for some advanced strategies.
Values considered “missing”#
I think you're looking for isin:
# you can use `loc` to access `new_cases` also
days = [4, 5, 6]
df.loc[df.weekday.isin(days), "new_cases"].mean()
which selects the rows whose weekday is in the days list; and then selects new_cases column.
As data comes in many shapes and forms, pandas aims to be flexible with regard
to handling missing data. While NaN is the default missing value marker for
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python None will
arise and we wish to also consider that “missing” or “not available” or “NA”.
Note
If you want to consider inf and -inf to be “NA” in computations,
you can set pandas.options.mode.use_inf_as_na = True.
In [1]: df = pd.DataFrame(
...: np.random.randn(5, 3),
...: index=["a", "c", "e", "f", "h"],
...: columns=["one", "two", "three"],
...: )
...:
In [2]: df["four"] = "bar"
In [3]: df["five"] = df["one"] > 0
In [4]: df
Out[4]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
c -1.135632 1.212112 -0.173215 bar False
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
h 0.721555 -0.706771 -1.039575 bar True
In [5]: df2 = df.reindex(["a", "b", "c", "d", "e", "f", "g", "h"])
In [6]: df2
Out[6]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
b NaN NaN NaN NaN NaN
c -1.135632 1.212112 -0.173215 bar False
d NaN NaN NaN NaN NaN
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
g NaN NaN NaN NaN NaN
h 0.721555 -0.706771 -1.039575 bar True
To make detecting missing values easier (and across different array dtypes),
pandas provides the isna() and
notna() functions, which are also methods on
Series and DataFrame objects:
In [7]: df2["one"]
Out[7]:
a 0.469112
b NaN
c -1.135632
d NaN
e 0.119209
f -2.104569
g NaN
h 0.721555
Name: one, dtype: float64
In [8]: pd.isna(df2["one"])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
In [9]: df2["four"].notna()
Out[9]:
a True
b False
c True
d False
e True
f True
g False
h True
Name: four, dtype: bool
In [10]: df2.isna()
Out[10]:
one two three four five
a False False False False False
b True True True True True
c False False False False False
d True True True True True
e False False False False False
f False False False False False
g True True True True True
h False False False False False
Warning
One has to be mindful that in Python (and NumPy), the nan's don’t compare equal, but None's do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None # noqa: E711
Out[11]: True
In [12]: np.nan == np.nan
Out[12]: False
So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful information.
In [13]: df2["one"] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool
Integer dtypes and missing data#
Because NaN is a float, a column of integers with even one missing value
is cast to floating-point dtype (see Support for integer NA for more). pandas
provides a nullable integer array, which can be used by explicitly requesting
the dtype:
In [14]: pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
Out[14]:
0 1
1 2
2 <NA>
3 4
dtype: Int64
Alternatively, the string alias dtype='Int64' (note the capital "I") can be
used.
See Nullable integer data type for more.
Datetimes#
For datetime64[ns] types, NaT represents missing values. This is a pseudo-native
sentinel value that can be represented by NumPy in a singular dtype (datetime64[ns]).
pandas objects provide compatibility between NaT and NaN.
In [15]: df2 = df.copy()
In [16]: df2["timestamp"] = pd.Timestamp("20120101")
In [17]: df2
Out[17]:
one two three four five timestamp
a 0.469112 -0.282863 -1.509059 bar True 2012-01-01
c -1.135632 1.212112 -0.173215 bar False 2012-01-01
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h 0.721555 -0.706771 -1.039575 bar True 2012-01-01
In [18]: df2.loc[["a", "c", "h"], ["one", "timestamp"]] = np.nan
In [19]: df2
Out[19]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [20]: df2.dtypes.value_counts()
Out[20]:
float64 3
object 1
bool 1
datetime64[ns] 1
dtype: int64
Inserting missing data#
You can insert missing values by simply assigning to containers. The
actual missing value used will be chosen based on the dtype.
For example, numeric containers will always use NaN regardless of
the missing value type chosen:
In [21]: s = pd.Series([1, 2, 3])
In [22]: s.loc[0] = None
In [23]: s
Out[23]:
0 NaN
1 2.0
2 3.0
dtype: float64
Likewise, datetime containers will always use NaT.
For object containers, pandas will use the value given:
In [24]: s = pd.Series(["a", "b", "c"])
In [25]: s.loc[0] = None
In [26]: s.loc[1] = np.nan
In [27]: s
Out[27]:
0 None
1 NaN
2 c
dtype: object
Calculations with missing data#
Missing values propagate naturally through arithmetic operations between pandas
objects.
In [28]: a
Out[28]:
one two
a NaN -0.282863
c NaN 1.212112
e 0.119209 -1.044236
f -2.104569 -0.494929
h -2.104569 -0.706771
In [29]: b
Out[29]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [30]: a + b
Out[30]:
one three two
a NaN NaN -0.565727
c NaN NaN 2.424224
e 0.238417 NaN -2.088472
f -4.209138 NaN -0.989859
h NaN NaN -1.413542
The descriptive statistics and computational methods discussed in the
data structure overview (and listed here and here) are all written to
account for missing data. For example:
When summing data, NA (missing) values will be treated as zero.
If the data are all NA, the result will be 0.
Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include NA values, use skipna=False.
In [31]: df
Out[31]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [32]: df["one"].sum()
Out[32]: -1.9853605075978744
In [33]: df.mean(1)
Out[33]:
a -0.895961
c 0.519449
e -0.595625
f -0.509232
h -0.873173
dtype: float64
In [34]: df.cumsum()
Out[34]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e 0.119209 -0.114987 -2.544122
f -1.985361 -0.609917 -1.472318
h NaN -1.316688 -2.511893
In [35]: df.cumsum(skipna=False)
Out[35]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e NaN -0.114987 -2.544122
f NaN -0.609917 -1.472318
h NaN -1.316688 -2.511893
Sum/prod of empties/nans#
Warning
This behavior is now standard as of v0.22.0 and is consistent with the default in numpy; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
See v0.22.0 whatsnew for more.
The sum of an empty or all-NA Series or column of a DataFrame is 0.
In [36]: pd.Series([np.nan]).sum()
Out[36]: 0.0
In [37]: pd.Series([], dtype="float64").sum()
Out[37]: 0.0
The product of an empty or all-NA Series or column of a DataFrame is 1.
In [38]: pd.Series([np.nan]).prod()
Out[38]: 1.0
In [39]: pd.Series([], dtype="float64").prod()
Out[39]: 1.0
NA values in GroupBy#
NA groups in GroupBy are automatically excluded. This behavior is consistent
with R, for example:
In [40]: df
Out[40]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [41]: df.groupby("one").mean()
Out[41]:
two three
one
-2.104569 -0.494929 1.071804
0.119209 -1.044236 -0.861849
See the groupby section here for more information.
Cleaning / filling missing data#
pandas objects are equipped with various data manipulation methods for dealing
with missing data.
Filling missing values: fillna#
fillna() can “fill in” NA values with non-NA data in a couple
of ways, which we illustrate:
Replace NA with a scalar value
In [42]: df2
Out[42]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [43]: df2.fillna(0)
Out[43]:
one two three four five timestamp
a 0.000000 -0.282863 -1.509059 bar True 0
c 0.000000 1.212112 -0.173215 bar False 0
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01 00:00:00
f -2.104569 -0.494929 1.071804 bar False 2012-01-01 00:00:00
h 0.000000 -0.706771 -1.039575 bar True 0
In [44]: df2["one"].fillna("missing")
Out[44]:
a missing
c missing
e 0.119209
f -2.104569
h missing
Name: one, dtype: object
Fill gaps forward or backward
Using the same filling arguments as reindexing, we
can propagate non-NA values forward or backward:
In [45]: df
Out[45]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [46]: df.fillna(method="pad")
Out[46]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h -2.104569 -0.706771 -1.039575
Limit the amount of filling
If we only want consecutive gaps filled up to a certain number of data points,
we can use the limit keyword:
In [47]: df
Out[47]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN NaN NaN
f NaN NaN NaN
h NaN -0.706771 -1.039575
In [48]: df.fillna(method="pad", limit=1)
Out[48]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 1.212112 -0.173215
f NaN NaN NaN
h NaN -0.706771 -1.039575
To remind you, these are the available filling methods:
Method            Action
pad / ffill       Fill values forward
bfill / backfill  Fill values backward
With time series data, using pad/ffill is extremely common so that the “last
known value” is available at every time point.
ffill() is equivalent to fillna(method='ffill')
and bfill() is equivalent to fillna(method='bfill')
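As a quick illustrative sketch of that equivalence:
s = pd.Series([1.0, None, None, 4.0])
s.ffill()   # 1.0, 1.0, 1.0, 4.0 -- same as s.fillna(method="ffill")
s.bfill()   # 1.0, 4.0, 4.0, 4.0 -- same as s.fillna(method="bfill")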
Filling with a PandasObject#
You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series
must match the columns of the frame you wish to fill. The
use case of this is to fill a DataFrame with the mean of that column.
In [49]: dff = pd.DataFrame(np.random.randn(10, 3), columns=list("ABC"))
In [50]: dff.iloc[3:5, 0] = np.nan
In [51]: dff.iloc[4:6, 1] = np.nan
In [52]: dff.iloc[5:8, 2] = np.nan
In [53]: dff
Out[53]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN NaN -1.157892
5 -1.344312 NaN NaN
6 -0.109050 1.643563 NaN
7 0.357021 -0.674600 NaN
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [54]: dff.fillna(dff.mean())
Out[54]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [55]: dff.fillna(dff.mean()["B":"C"])
Out[55]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Same result as above, but is aligning the ‘fill’ value which is
a Series in this case.
In [56]: dff.where(pd.notna(dff), dff.mean(), axis="columns")
Out[56]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Dropping axis labels with missing data: dropna#
You may wish to simply exclude labels from a data set which refer to missing
data. To do this, use dropna():
In [57]: df
Out[57]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -0.706771 -1.039575
In [58]: df.dropna(axis=0)
Out[58]:
Empty DataFrame
Columns: [one, two, three]
Index: []
In [59]: df.dropna(axis=1)
Out[59]:
two three
a -0.282863 -1.509059
c 1.212112 -0.173215
e 0.000000 0.000000
f 0.000000 0.000000
h -0.706771 -1.039575
In [60]: df["one"].dropna()
Out[60]: Series([], Name: one, dtype: float64)
An equivalent dropna() is available for Series.
DataFrame.dropna has considerably more options than Series.dropna, which can be
examined in the API.
Interpolation#
Both Series and DataFrame objects have interpolate()
that, by default, performs linear interpolation at missing data points.
In [61]: ts
Out[61]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [62]: ts.count()
Out[62]: 66
In [63]: ts.plot()
Out[63]: <AxesSubplot: >
In [64]: ts.interpolate()
Out[64]:
2000-01-31 0.469112
2000-02-29 0.434469
2000-03-31 0.399826
2000-04-28 0.365184
2000-05-31 0.330541
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [65]: ts.interpolate().count()
Out[65]: 100
In [66]: ts.interpolate().plot()
Out[66]: <AxesSubplot: >
Index aware interpolation is available via the method keyword:
In [67]: ts2
Out[67]:
2000-01-31 0.469112
2000-02-29 NaN
2002-07-31 -5.785037
2005-01-31 NaN
2008-04-30 -9.011531
dtype: float64
In [68]: ts2.interpolate()
Out[68]:
2000-01-31 0.469112
2000-02-29 -2.657962
2002-07-31 -5.785037
2005-01-31 -7.398284
2008-04-30 -9.011531
dtype: float64
In [69]: ts2.interpolate(method="time")
Out[69]:
2000-01-31 0.469112
2000-02-29 0.270241
2002-07-31 -5.785037
2005-01-31 -7.190866
2008-04-30 -9.011531
dtype: float64
For a floating-point index, use method='values':
In [70]: ser
Out[70]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64
In [71]: ser.interpolate()
Out[71]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64
In [72]: ser.interpolate(method="values")
Out[72]:
0.0 0.0
1.0 1.0
10.0 10.0
dtype: float64
You can also interpolate with a DataFrame:
In [73]: df = pd.DataFrame(
....: {
....: "A": [1, 2.1, np.nan, 4.7, 5.6, 6.8],
....: "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4],
....: }
....: )
....:
In [74]: df
Out[74]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
In [75]: df.interpolate()
Out[75]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
The method argument gives access to fancier interpolation methods.
If you have scipy installed, you can pass the name of a 1-d interpolation routine to method.
You’ll want to consult the full scipy interpolation documentation and reference guide for details.
The appropriate interpolation method will depend on the type of data you are working with.
If you are dealing with a time series that is growing at an increasing rate,
method='quadratic' may be appropriate.
If you have values approximating a cumulative distribution function,
then method='pchip' should work well.
To fill missing values with goal of smooth plotting, consider method='akima'.
Warning
These methods require scipy.
In [76]: df.interpolate(method="barycentric")
Out[76]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400
In [77]: df.interpolate(method="pchip")
Out[77]:
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000
In [78]: df.interpolate(method="akima")
Out[78]:
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
When interpolating via a polynomial or spline approximation, you must also specify
the degree or order of the approximation:
In [79]: df.interpolate(method="spline", order=2)
Out[79]:
A B
0 1.000000 0.250000
1 2.100000 -0.428598
2 3.404545 1.206900
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [80]: df.interpolate(method="polynomial", order=2)
Out[80]:
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
Compare several methods:
In [81]: np.random.seed(2)
In [82]: ser = pd.Series(np.arange(1, 10.1, 0.25) ** 2 + np.random.randn(37))
In [83]: missing = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
In [84]: ser[missing] = np.nan
In [85]: methods = ["linear", "quadratic", "cubic"]
In [86]: df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})
In [87]: df.plot()
Out[87]: <AxesSubplot: >
Another use case is interpolation at new values.
Suppose you have 100 observations from some distribution. And let’s suppose
that you’re particularly interested in what’s happening around the middle.
You can mix pandas’ reindex and interpolate methods to interpolate
at the new values.
In [88]: ser = pd.Series(np.sort(np.random.uniform(size=100)))
# interpolate at new_index
In [89]: new_index = ser.index.union(pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75]))
In [90]: interp_s = ser.reindex(new_index).interpolate(method="pchip")
In [91]: interp_s[49:51]
Out[91]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64
Interpolation limits#
Like other pandas fill methods, interpolate() accepts a limit keyword
argument. Use this argument to limit the number of consecutive NaN values
filled since the last valid observation:
In [92]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
In [93]: ser
Out[93]:
0 NaN
1 NaN
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive values in a forward direction
In [94]: ser.interpolate()
Out[94]:
0 NaN
1 NaN
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
# fill one consecutive value in a forward direction
In [95]: ser.interpolate(limit=1)
Out[95]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 NaN
6 13.0
7 13.0
8 NaN
dtype: float64
By default, NaN values are filled in a forward direction. Use
limit_direction parameter to fill backward or from both directions.
# fill one consecutive value backwards
In [96]: ser.interpolate(limit=1, limit_direction="backward")
Out[96]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill one consecutive value in both directions
In [97]: ser.interpolate(limit=1, limit_direction="both")
Out[97]:
0 NaN
1 5.0
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 13.0
8 NaN
dtype: float64
# fill all consecutive values in both directions
In [98]: ser.interpolate(limit_direction="both")
Out[98]:
0 5.0
1 5.0
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
By default, NaN values are filled whether they are inside (surrounded by)
existing valid values, or outside existing valid values. The limit_area
parameter restricts filling to either inside or outside values.
# fill one consecutive inside value in both directions
In [99]: ser.interpolate(limit_direction="both", limit_area="inside", limit=1)
Out[99]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values backward
In [100]: ser.interpolate(limit_direction="backward", limit_area="outside")
Out[100]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values in both directions
In [101]: ser.interpolate(limit_direction="both", limit_area="outside")
Out[101]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 13.0
8 13.0
dtype: float64
Replacing generic values#
Often times we want to replace arbitrary values with other values.
replace() in Series and replace() in DataFrame provides an efficient yet
flexible way to perform such replacements.
For a Series, you can replace a single value or a list of values by another
value:
In [102]: ser = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
In [103]: ser.replace(0, 5)
Out[103]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
You can replace a list of values by a list of other values:
In [104]: ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
Out[104]:
0 4.0
1 3.0
2 2.0
3 1.0
4 0.0
dtype: float64
You can also specify a mapping dict:
In [105]: ser.replace({0: 10, 1: 100})
Out[105]:
0 10.0
1 100.0
2 2.0
3 3.0
4 4.0
dtype: float64
For a DataFrame, you can specify individual values by column:
In [106]: df = pd.DataFrame({"a": [0, 1, 2, 3, 4], "b": [5, 6, 7, 8, 9]})
In [107]: df.replace({"a": 0, "b": 5}, 100)
Out[107]:
a b
0 100 100
1 1 6
2 2 7
3 3 8
4 4 9
Instead of replacing with specified values, you can treat all given values as
missing and interpolate over them:
In [108]: ser.replace([1, 2, 3], method="pad")
Out[108]:
0 0.0
1 0.0
2 0.0
3 0.0
4 4.0
dtype: float64
String/regular expression replacement#
Note
Python strings prefixed with the r character such as r'hello world'
are so-called “raw” strings. They have different semantics regarding
backslashes than strings without this prefix. Backslashes in raw strings
will be interpreted as an escaped backslash, e.g., r'\' == '\\'. You
should read about them
if this is unclear.
Replace the ‘.’ with NaN (str -> str):
In [109]: d = {"a": list(range(4)), "b": list("ab.."), "c": ["a", "b", np.nan, "d"]}
In [110]: df = pd.DataFrame(d)
In [111]: df.replace(".", np.nan)
Out[111]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Now do it with a regular expression that removes surrounding whitespace
(regex -> regex):
In [112]: df.replace(r"\s*\.\s*", np.nan, regex=True)
Out[112]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Replace a few different values (list -> list):
In [113]: df.replace(["a", "."], ["b", np.nan])
Out[113]:
a b c
0 0 b b
1 1 b b
2 2 NaN NaN
3 3 NaN d
list of regex -> list of regex:
In [114]: df.replace([r"\.", r"(a)"], ["dot", r"\1stuff"], regex=True)
Out[114]:
a b c
0 0 astuff astuff
1 1 b b
2 2 dot NaN
3 3 dot d
Only search in column 'b' (dict -> dict):
In [115]: df.replace({"b": "."}, {"b": np.nan})
Out[115]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Same as the previous example, but use a regular expression for
searching instead (dict of regex -> dict):
In [116]: df.replace({"b": r"\s*\.\s*"}, {"b": np.nan}, regex=True)
Out[116]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can pass nested dictionaries of regular expressions that use regex=True:
In [117]: df.replace({"b": {"b": r""}}, regex=True)
Out[117]:
a b c
0 0 a a
1 1 b
2 2 . NaN
3 3 . d
Alternatively, you can pass the nested dictionary like so:
In [118]: df.replace(regex={"b": {r"\s*\.\s*": np.nan}})
Out[118]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can also use the group of a regular expression match when replacing (dict
of regex -> dict of regex), this works for lists as well.
In [119]: df.replace({"b": r"\s*(\.)\s*"}, {"b": r"\1ty"}, regex=True)
Out[119]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d
You can pass a list of regular expressions, of which those that match
will be replaced with a scalar (list of regex -> regex).
In [120]: df.replace([r"\s*\.\s*", r"a|b"], np.nan, regex=True)
Out[120]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
All of the regular expression examples can also be passed with the
to_replace argument as the regex argument. In this case the value
argument must be passed explicitly by name or regex must be a nested
dictionary. The previous example, in this case, would then be:
In [121]: df.replace(regex=[r"\s*\.\s*", r"a|b"], value=np.nan)
Out[121]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
This can be convenient if you do not want to pass regex=True every time you
want to use a regular expression.
Note
Anywhere in the above replace examples that you see a regular expression
a compiled regular expression is valid as well.
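For instance, a brief sketch reusing the df from the replace examples above:
import re
pattern = re.compile(r"\s*\.\s*")
df.replace(pattern, np.nan, regex=True)   # same result as passing the pattern string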
Numeric replacement#
replace() is similar to fillna().
In [122]: df = pd.DataFrame(np.random.randn(10, 2))
In [123]: df[np.random.rand(df.shape[0]) > 0.5] = 1.5
In [124]: df.replace(1.5, np.nan)
Out[124]:
0 1
0 -0.844214 -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
Replacing more than one value is possible by passing a list.
In [125]: df00 = df.iloc[0, 0]
In [126]: df.replace([1.5, df00], [np.nan, "a"])
Out[126]:
0 1
0 a -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
In [127]: df[1].dtype
Out[127]: dtype('float64')
You can also operate on the DataFrame in place:
In [128]: df.replace(1.5, np.nan, inplace=True)
Missing data casting rules and indexing#
While pandas supports storing arrays of integer and boolean type, these types
are not capable of storing missing data. Until we can switch to using a native
NA type in NumPy, we’ve established some “casting rules”. When a reindexing
operation introduces missing data, the Series will be cast according to the
rules introduced in the table below.
data type   Cast to
integer     float
boolean     object
float       no cast
object      no cast
For example:
In [129]: s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])
In [130]: s > 0
Out[130]:
0 True
2 True
4 True
6 True
7 True
dtype: bool
In [131]: (s > 0).dtype
Out[131]: dtype('bool')
In [132]: crit = (s > 0).reindex(list(range(8)))
In [133]: crit
Out[133]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
7 True
dtype: object
In [134]: crit.dtype
Out[134]: dtype('O')
Ordinarily NumPy will complain if you try to use an object array (even if it
contains boolean values) instead of a boolean array to get or set values from
an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:
In [135]: reindexed = s.reindex(list(range(8))).fillna(0)
In [136]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[136], line 1
----> 1 reindexed[crit]
File ~/work/pandas/pandas/pandas/core/series.py:1002, in Series.__getitem__(self, key)
999 if is_iterator(key):
1000 key = list(key)
-> 1002 if com.is_bool_indexer(key):
1003 key = check_bool_indexer(self.index, key)
1004 key = np.asarray(key, dtype=bool)
File ~/work/pandas/pandas/pandas/core/common.py:135, in is_bool_indexer(key)
131 na_msg = "Cannot mask with non-boolean array containing NA / NaN values"
132 if lib.infer_dtype(key_array) == "boolean" and isna(key_array).any():
133 # Don't raise on e.g. ["A", "B", np.nan], see
134 # test_loc_getitem_list_of_labels_categoricalindex_with_na
--> 135 raise ValueError(na_msg)
136 return False
137 return True
ValueError: Cannot mask with non-boolean array containing NA / NaN values
However, these can be filled in using fillna() and it will work fine:
In [137]: reindexed[crit.fillna(False)]
Out[137]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64
In [138]: reindexed[crit.fillna(True)]
Out[138]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
7 0.003659
dtype: float64
pandas provides a nullable integer dtype, but you must explicitly request it
when creating the series or column. Notice that we use a capital “I” in
the dtype="Int64".
In [139]: s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")
In [140]: s
Out[140]:
0 0
1 1
2 <NA>
3 3
4 4
dtype: Int64
See Nullable integer data type for more.
Experimental NA scalar to denote missing values#
Warning
Experimental: the behaviour of pd.NA can still change without warning.
New in version 1.0.0.
Starting from pandas 1.0, an experimental pd.NA value (singleton) is
available to represent scalar missing values. At this moment, it is used in
the nullable integer, boolean and
dedicated string data types as the missing value indicator.
The goal of pd.NA is to provide a “missing” indicator that can be used
consistently across data types (instead of np.nan, None or pd.NaT
depending on the data type).
For example, when having missing values in a Series with the nullable integer
dtype, it will use pd.NA:
In [141]: s = pd.Series([1, 2, None], dtype="Int64")
In [142]: s
Out[142]:
0 1
1 2
2 <NA>
dtype: Int64
In [143]: s[2]
Out[143]: <NA>
In [144]: s[2] is pd.NA
Out[144]: True
Currently, pandas does not yet use those data types by default (when creating
a DataFrame or Series, or when reading in data), so you need to specify
the dtype explicitly. An easy way to convert to those dtypes is explained
here.
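A minimal sketch of such a conversion (illustrative only):
s = pd.Series([1, 2, None])   # float64 with NaN
s.convert_dtypes()            # Int64 with <NA>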
Propagation in arithmetic and comparison operations#
In general, missing values propagate in operations involving pd.NA. When
one of the operands is unknown, the outcome of the operation is also unknown.
For example, pd.NA propagates in arithmetic operations, similarly to
np.nan:
In [145]: pd.NA + 1
Out[145]: <NA>
In [146]: "a" * pd.NA
Out[146]: <NA>
There are a few special cases when the result is known, even when one of the
operands is NA.
In [147]: pd.NA ** 0
Out[147]: 1
In [148]: 1 ** pd.NA
Out[148]: 1
In equality and comparison operations, pd.NA also propagates. This deviates
from the behaviour of np.nan, where comparisons with np.nan always
return False.
In [149]: pd.NA == 1
Out[149]: <NA>
In [150]: pd.NA == pd.NA
Out[150]: <NA>
In [151]: pd.NA < 2.5
Out[151]: <NA>
To check if a value is equal to pd.NA, the isna() function can be
used:
In [152]: pd.isna(pd.NA)
Out[152]: True
An exception on this basic propagation rule are reductions (such as the
mean or the minimum), where pandas defaults to skipping missing values. See
above for more.
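A minimal sketch of that default (illustrative only):
s = pd.Series([1, 2, pd.NA], dtype="Int64")
s.mean()               # 1.5 -- the NA is skipped
s.mean(skipna=False)   # <NA>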
Logical operations#
For logical operations, pd.NA follows the rules of the
three-valued logic (or
Kleene logic, similarly to R, SQL and Julia). This logic means to only
propagate missing values when it is logically required.
For example, for the logical “or” operation (|), if one of the operands
is True, we already know the result will be True, regardless of the
other value (so regardless the missing value would be True or False).
In this case, pd.NA does not propagate:
In [153]: True | False
Out[153]: True
In [154]: True | pd.NA
Out[154]: True
In [155]: pd.NA | True
Out[155]: True
On the other hand, if one of the operands is False, the result depends
on the value of the other operand. Therefore, in this case pd.NA
propagates:
In [156]: False | True
Out[156]: True
In [157]: False | False
Out[157]: False
In [158]: False | pd.NA
Out[158]: <NA>
The behaviour of the logical “and” operation (&) can be derived using
similar logic (where now pd.NA will not propagate if one of the operands
is already False):
In [159]: False & True
Out[159]: False
In [160]: False & False
Out[160]: False
In [161]: False & pd.NA
Out[161]: False
In [162]: True & True
Out[162]: True
In [163]: True & False
Out[163]: False
In [164]: True & pd.NA
Out[164]: <NA>
NA in a boolean context#
Since the actual value of an NA is unknown, it is ambiguous to convert NA
to a boolean value. The following raises an error:
In [165]: bool(pd.NA)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[165], line 1
----> 1 bool(pd.NA)
File ~/work/pandas/pandas/pandas/_libs/missing.pyx:382, in pandas._libs.missing.NAType.__bool__()
TypeError: boolean value of NA is ambiguous
This also means that pd.NA cannot be used in a context where it is
evaluated to a boolean, such as if condition: ... where condition can
potentially be pd.NA. In such cases, isna() can be used to check
for pd.NA or condition being pd.NA can be avoided, for example by
filling missing values beforehand.
A similar situation occurs when using Series or DataFrame objects in if
statements, see Using if/truth statements with pandas.
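One illustrative way to sidestep the ambiguity is to resolve the NA before the boolean check:
flag = pd.NA
if pd.isna(flag):      # explicit check instead of `if flag:`
    flag = False       # choose an explicit fallback
if flag:
    print("flag is set")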
NumPy ufuncs#
pandas.NA implements NumPy’s __array_ufunc__ protocol. Most ufuncs
work with NA, and generally return NA:
In [166]: np.log(pd.NA)
Out[166]: <NA>
In [167]: np.add(pd.NA, 1)
Out[167]: <NA>
Warning
Currently, ufuncs involving an ndarray and NA will return an
object-dtype filled with NA values.
In [168]: a = np.array([1, 2, 3])
In [169]: np.greater(a, pd.NA)
Out[169]: array([<NA>, <NA>, <NA>], dtype=object)
The return type here may change to return a different array type
in the future.
See DataFrame interoperability with NumPy functions for more on ufuncs.
Conversion#
If you have a DataFrame or Series using traditional types that have missing data
represented using np.nan, there are convenience methods
convert_dtypes() in Series and convert_dtypes()
in DataFrame that can convert data to use the newer dtypes for integers, strings and
booleans listed here. This is especially helpful after reading
in data sets when letting the readers such as read_csv() and read_excel()
infer default dtypes.
In this example, while the dtypes of all columns are changed, we show the results for
the first 10 columns.
In [170]: bb = pd.read_csv("data/baseball.csv", index_col="id")
In [171]: bb[bb.columns[:10]].dtypes
Out[171]:
player object
year int64
stint int64
team object
lg object
g int64
ab int64
r int64
h int64
X2b int64
dtype: object
In [172]: bbn = bb.convert_dtypes()
In [173]: bbn[bbn.columns[:10]].dtypes
Out[173]:
player string
year Int64
stint Int64
team string
lg string
g Int64
ab Int64
r Int64
h Int64
X2b Int64
dtype: object
| 477
| 717
|
How to fill in multiple boolean criteria in Pandas
I found an alternate answer here link
My dataframe currently looks like this
date new_cases new_deaths new_tests year month day weekday
0 2019-12-31 0.0 0.0 NaN 2019 12 31 1
1 2020-01-01 0.0 0.0 NaN 2020 1 1 2
2 2020-01-02 0.0 0.0 NaN 2020 1 2 3
I want to pass code which will average out the 'new_cases' for weekdays and weekends.
My code currently looks like this, but I can only pass one condition, which is ' == 6 '. I want to pass multiple criteria, for example == (4,5,6)
covid_df[covid_df.weekday == 6].new_cases.mean()
Any clue?
|
60,262,687
|
Group multiple columns and rank within the group
|
<p><strong>Hi</strong>,</p>
<p>I am looking to set a ranking within each group, but I am not able to figure out how to do it.</p>
<pre><code>Below is the test data,
df = pd.DataFrame()
names = ['California','New York','California','New York', 'California','New York', 'California', 'California', 'California', 'California']
types = ['Student','Student','Student','Student','Student', 'Pleasure', 'Pleasure', 'Pleasure','Business', 'Business']
df['names'] = names
df['type'] = types
df.groupby(['names', 'type']).size().reset_index(name='counts')
Below is the output
names type counts
0 California Business 2
1 California Pleasure 2
2 California Student 3
3 New York Pleasure 1
4 New York Student 2
</code></pre>
<p>I would like to get the output below. Ranking is based on the columns names and counts (descending). In California, the types Business & Pleasure have the same counts; for me it doesn't matter whether the resulting ranks are 2,3 or 2,2.</p>
<pre><code> names type counts Rank
0 California Business 2 2
1 California Pleasure 2 2
2 California Student 3 1
3 New York Pleasure 1 2
4 New York Student 2 1
</code></pre>
<p>Any ideas/solutions</p>
<p>Thanks</p>
| 60,262,721
| 2020-02-17T12:39:30.070000
| 1
| null | 1
| 82
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.rank.html" rel="nofollow noreferrer"><code>GroupBy.rank</code></a> with casting floats to integers by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer"><code>Series.astype</code></a>:</p>
<pre><code>df['Rank'] = df.groupby('names')['counts'].rank(ascending=False, method='dense').astype(int)
print(df)
names type counts Rank
0 California Business 2 2
1 California Pleasure 2 2
2 California Student 3 1
3 New York Pleasure 1 2
4 New York Student 2 1
</code></pre>
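<p>If you would rather break the tie (ranks 2 and 3 for Business and Pleasure instead of 2 and 2), a possible variant is <code>method='first'</code>, which ranks ties in their order of appearance:</p>
<pre><code>df['Rank'] = df.groupby('names')['counts'].rank(ascending=False, method='first').astype(int)
</code></pre>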
| 2020-02-17T12:41:37.860000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rank.html
|
pandas.DataFrame.rank#
pandas.DataFrame.rank#
DataFrame.rank(axis=0, method='average', numeric_only=_NoDefault.no_default, na_option='keep', ascending=True, pct=False)[source]#
Compute numerical data ranks (1 through n) along axis.
By default, equal values are assigned a rank that is the average of the
ranks of those values.
Parameters
axis : {0 or ‘index’, 1 or ‘columns’}, default 0. Index to direct ranking.
For Series this parameter is unused and defaults to 0.
method : {‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’. How to rank the group of records that have the same value (i.e. ties):
average: average rank of the group
min: lowest rank in the group
max: highest rank in the group
first: ranks assigned in order they appear in the array
dense: like ‘min’, but rank always increases by 1 between groups.
numeric_only : bool, optional. For DataFrame objects, rank only numeric columns if set to True.
Use GroupBy.rank with casting floats to integers by Series.astype:
df['Rank'] = df.groupby('names')['counts'].rank(ascending=False, method='dense').astype(int)
print(df)
names type counts Rank
0 California Business 2 2
1 California Pleasure 2 2
2 California Student 3 1
3 New York Pleasure 1 2
4 New York Student 2 1
na_option : {‘keep’, ‘top’, ‘bottom’}, default ‘keep’. How to rank NaN values:
keep: assign NaN rank to NaN values
top: assign lowest rank to NaN values
bottom: assign highest rank to NaN values
ascending : bool, default True. Whether or not the elements should be ranked in ascending order.
pct : bool, default False. Whether or not to display the returned rankings in percentile form.
Returns
same type as caller. Return a Series or DataFrame with data ranks as values.
See also
core.groupby.GroupBy.rank : Rank of values within each group.
Examples
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
>>> df
Animal Number_legs
0 cat 4.0
1 penguin 2.0
2 dog 4.0
3 spider 8.0
4 snake NaN
Ties are assigned the mean of the ranks (by default) for the group.
>>> s = pd.Series(range(5), index=list("abcde"))
>>> s["d"] = s["b"]
>>> s.rank()
a 1.0
b 2.5
c 4.0
d 2.5
e 5.0
dtype: float64
The following example shows how the method behaves with the above
parameters:
default_rank: this is the default behaviour obtained without using
any parameter.
max_rank: setting method = 'max' the records that have the
same values are ranked using the highest rank (e.g.: since ‘cat’
and ‘dog’ are both in the 2nd and 3rd position, rank 3 is assigned.)
NA_bottom: choosing na_option = 'bottom', if there are records
with NaN values they are placed at the bottom of the ranking.
pct_rank: when setting pct = True, the ranking is expressed as
percentile rank.
>>> df['default_rank'] = df['Number_legs'].rank()
>>> df['max_rank'] = df['Number_legs'].rank(method='max')
>>> df['NA_bottom'] = df['Number_legs'].rank(na_option='bottom')
>>> df['pct_rank'] = df['Number_legs'].rank(pct=True)
>>> df
Animal Number_legs default_rank max_rank NA_bottom pct_rank
0 cat 4.0 2.5 3.0 2.5 0.625
1 penguin 2.0 1.0 1.0 1.0 0.250
2 dog 4.0 2.5 3.0 2.5 0.625
3 spider 8.0 4.0 4.0 4.0 1.000
4 snake NaN NaN NaN 5.0 NaN
| 921
| 1,319
|
Group multiple columns and rank within the group
Hi,
I am looking to set a ranking within each group, but I am not able to figure out how to do it.
Below is the test data,
df = pd.DataFrame()
names = ['California','New York','California','New York', 'California','New York', 'California', 'California', 'California', 'California']
types = ['Student','Student','Student','Student','Student', 'Pleasure', 'Pleasure', 'Pleasure','Business', 'Business']
df['names'] = names
df['type'] = types
df.groupby(['names', 'type']).size().reset_index(name='counts')
Below is the output
names type counts
0 California Business 2
1 California Pleasure 2
2 California Student 3
3 New York Pleasure 1
4 New York Student 2
I would like to get the output below. Ranking is based on the columns names and counts (descending). In California, the types Business & Pleasure have the same counts; for me it doesn't matter whether the resulting ranks are 2,3 or 2,2.
names type counts Rank
0 California Business 2 2
1 California Pleasure 2 2
2 California Student 3 1
3 New York Pleasure 1 2
4 New York Student 2 1
Any ideas/solutions
Thanks
|
68,114,989
|
Pandas: Add missing dates up to current date
|
<p>Dataframe Example</p>
<pre class="lang-plaintext prettyprint-override"><code> Date Code Count_1 Count_2
01-01-2021 A 5 4
04-01-2021 A 5 5
05-01-2021 A 7 5
07-01-2021 A 6 7
05-02-2021 B 8 7
09-02-2021 B 4 3
10-02-2021 B 4 5
11-02-2021 B 6 7
.
.
.
</code></pre>
<p>How do I add all missing dates (if any) between the <code>starting date</code> (01-01-2021) and the <code>current date</code>, grouped by <code>Code</code>, while the values in <code>Count_1</code> and <code>Count_2</code> equal NaN for the newly added dates?</p>
| 68,115,053
| 2021-06-24T11:36:18.957000
| 1
| null | 3
| 83
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with unique <code>code</code>s and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a> for all combinations of pairs dates and codes:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
idx = pd.date_range('01-01-2021', pd.to_datetime('now').normalize())
codes = df['Code'].unique()
df1 = (df.set_index(['Date','Code'])
.reindex(pd.MultiIndex.from_product([idx, codes], names=['Date','Code']))
.reset_index())
print (df1)
Date Code Count_1 Count_2
0 2021-01-01 A 5.0 4.0
1 2021-01-01 B NaN NaN
2 2021-01-02 A NaN NaN
3 2021-01-02 B NaN NaN
4 2021-01-03 A NaN NaN
.. ... ... ... ...
345 2021-06-22 B NaN NaN
346 2021-06-23 A NaN NaN
347 2021-06-23 B NaN NaN
348 2021-06-24 A NaN NaN
349 2021-06-24 B NaN NaN
[350 rows x 4 columns]
</code></pre>
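<p>If instead each <code>Code</code> should only be padded from its own first date up to today (rather than from a global start date), a possible per-group sketch, assuming <code>Date</code> is already converted as above and is unique within each <code>Code</code>, is:</p>
<pre><code>today = pd.to_datetime('now').normalize()

def pad_group(g):
    # build the full daily range for this Code and reindex onto it
    idx = pd.date_range(g['Date'].min(), today, name='Date')
    return g.set_index('Date').reindex(idx)

df2 = (df.groupby('Code', group_keys=True)
         .apply(pad_group)
         .drop(columns='Code')   # Code is recovered from the group key
         .reset_index())
</code></pre>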
| 2021-06-24T11:40:44.163000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html
|
pandas.DataFrame.reindex#
pandas.DataFrame.reindex#
DataFrame.reindex(labels=None, index=None, columns=None, axis=None, method=None, copy=None, level=None, fill_value=nan, limit=None, tolerance=None)[source]#
Conform Series/DataFrame to new index with optional filling logic.
Places NA/NaN in locations having no value in the previous index. A new object
Use DataFrame.reindex with unique codes and date_range for all combinations of pairs dates and codes:
df['Date'] = pd.to_datetime(df['Date'])
idx = pd.date_range('01-01-2021', pd.to_datetime('now').normalize())
codes = df['Code'].unique()
df1 = (df.set_index(['Date','Code'])
.reindex(pd.MultiIndex.from_product([idx, codes], names=['Date','Code']))
.reset_index())
print (df1)
Date Code Count_1 Count_2
0 2021-01-01 A 5.0 4.0
1 2021-01-01 B NaN NaN
2 2021-01-02 A NaN NaN
3 2021-01-02 B NaN NaN
4 2021-01-03 A NaN NaN
.. ... ... ... ...
345 2021-06-22 B NaN NaN
346 2021-06-23 A NaN NaN
347 2021-06-23 B NaN NaN
348 2021-06-24 A NaN NaN
349 2021-06-24 B NaN NaN
[350 rows x 4 columns]
is produced unless the new index is equivalent to the current one and
copy=False.
Parameters
keywords for axes : array-like, optional. New labels / index to conform to, should be specified using
keywords. Preferably an Index object to avoid duplicating data.
method : {None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}. Method to use for filling holes in reindexed DataFrame.
Please note: this is only applicable to DataFrames/Series with a
monotonically increasing/decreasing index.
None (default): don’t fill gaps
pad / ffill: Propagate last valid observation forward to next
valid.
backfill / bfill: Use next valid observation to fill gap.
nearest: Use nearest valid observations to fill gap.
copy : bool, default True. Return a new object, even if the passed indexes are the same.
level : int or name. Broadcast across a level, matching Index values on the
passed MultiIndex level.
fill_value : scalar, default np.NaN. Value to use for missing values. Defaults to NaN, but can be any
“compatible” value.
limit : int, default None. Maximum number of consecutive elements to forward or backward fill.
tolerance : optional. Maximum distance between original and new labels for inexact
matches. The values of the index at the matching locations must
satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance
to all values, or list-like, which applies variable tolerance per
element. List-like includes list, tuple, array, Series, and must be
the same size as the index and its dtype must exactly match the
index’s type.
Returns
Series/DataFrame with changed index.
See also
DataFrame.set_index : Set row labels.
DataFrame.reset_index : Remove row labels or move them to new columns.
DataFrame.reindex_like : Change to same indices as other DataFrame.
Examples
DataFrame.reindex supports two calling conventions
(index=index_labels, columns=column_labels, ...)
(labels, axis={'index', 'columns'}, ...)
We highly recommend using keyword arguments to clarify your
intent.
Create a dataframe with some fictional data.
>>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301],
... 'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
... index=index)
>>> df
http_status response_time
Firefox 200 0.04
Chrome 200 0.02
Safari 404 0.07
IE10 404 0.08
Konqueror 301 1.00
Create a new index and reindex the dataframe. By default
values in the new index that do not have corresponding
records in the dataframe are assigned NaN.
>>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10',
... 'Chrome']
>>> df.reindex(new_index)
http_status response_time
Safari 404.0 0.07
Iceweasel NaN NaN
Comodo Dragon NaN NaN
IE10 404.0 0.08
Chrome 200.0 0.02
We can fill in the missing values by passing a value to
the keyword fill_value. Because the index is not monotonically
increasing or decreasing, we cannot use arguments to the keyword
method to fill the NaN values.
>>> df.reindex(new_index, fill_value=0)
http_status response_time
Safari 404 0.07
Iceweasel 0 0.00
Comodo Dragon 0 0.00
IE10 404 0.08
Chrome 200 0.02
>>> df.reindex(new_index, fill_value='missing')
http_status response_time
Safari 404 0.07
Iceweasel missing missing
Comodo Dragon missing missing
IE10 404 0.08
Chrome 200 0.02
We can also reindex the columns.
>>> df.reindex(columns=['http_status', 'user_agent'])
http_status user_agent
Firefox 200 NaN
Chrome 200 NaN
Safari 404 NaN
IE10 404 NaN
Konqueror 301 NaN
Or we can use “axis-style” keyword arguments
>>> df.reindex(['http_status', 'user_agent'], axis="columns")
http_status user_agent
Firefox 200 NaN
Chrome 200 NaN
Safari 404 NaN
IE10 404 NaN
Konqueror 301 NaN
To further illustrate the filling functionality in
reindex, we will create a dataframe with a
monotonically increasing index (for example, a sequence
of dates).
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
... index=date_index)
>>> df2
prices
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
Suppose we decide to expand the dataframe to cover a wider
date range.
>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2)
prices
2009-12-29 NaN
2009-12-30 NaN
2009-12-31 NaN
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
2010-01-07 NaN
The index entries that did not have a value in the original data frame
(for example, ‘2009-12-29’) are by default filled with NaN.
If desired, we can fill in the missing values using one of several
options.
For example, to back-propagate the last valid value to fill the NaN
values, pass bfill as an argument to the method keyword.
>>> df2.reindex(date_index2, method='bfill')
prices
2009-12-29 100.0
2009-12-30 100.0
2009-12-31 100.0
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
2010-01-07 NaN
Please note that the NaN value present in the original dataframe
(at index value 2010-01-03) will not be filled by any of the
value propagation schemes. This is because filling while reindexing
does not look at dataframe values, but only compares the original and
desired indexes. If you do want to fill in the NaN values present
in the original dataframe, use the fillna() method.
See the user guide for more.
| 359
| 1,237
|
Pandas: Add missing dates up to current date
Dataframe Example
Date Code Count_1 Count_2
01-01-2021 A 5 4
04-01-2021 A 5 5
05-01-2021 A 7 5
07-01-2021 A 6 7
05-02-2021 B 8 7
09-02-2021 B 4 3
10-02-2021 B 4 5
11-02-2021 B 6 7
.
.
.
How do I add all missing dates (if any) between the starting date (01-01-2021) and the current date, grouped by Code, while the values in Count_1 and Count_2 equal NaN for the newly added dates?
|
68,076,609
|
Python - How to compare the values for row and previous row and return time difference based on condition?
|
<p>I have a dataset that contains TimeStamp, CustomerID and Session ID.
As you can see, the clientId and Session_ID repeat over time.</p>
<pre><code>Timestamp clientId Session_ID
0 6/12/2021 15:05 27255667 ab89
1 6/12/2021 19:56 118698247 684a
2 6/12/2021 23:59 99492237 a4fd
3 6/12/2021 23:59 99492237 a4fd
4 6/12/2021 23:59 99492237 a4fd
5 6/12/2021 23:59 99492237 a4fd
6 6/13/2021 0:06 99492237 a5fd
7 6/13/2021 0:41 142584462 c23000
8 6/13/2021 23:33 142584462 c23000
9 6/13/2021 23:33 142584462 c23000
10 6/13/2021 23:33 142584462 c23000
11 6/13/2021 23:34 142584462 c23000
12 6/13/2021 23:34 142584462 c23000
13 6/13/2021 23:34 142584462 7d97
</code></pre>
<p>I need to find the instances where a clientId gets a new Session_ID, and then calculate the time difference between the previous Session_ID and the new Session_ID for the same client.</p>
<p>For example, client_ID 99492237 had the same session for several rows and then the next one was different. The time difference would be 6/13/2021 0:06 - 6/12/2021 23:59.</p>
<p>This is what I tried so far:</p>
<pre><code># importing dependencies
import pandas as pd
import numpy as np
# importing data from SampleData.csv
df = pd.read_csv('data/SampleData.csv' ,converters={'clientId':str})
#converting timestamp string into timestap datatype
df["Time"] = pd.to_datetime(df["Timestamp"])
df.sort_values(["Timestamp", "clientId"], ascending = (True, True))
df["SameSession?"] = np.where((df['clientId'] == df['clientId'].shift(-1)) & (df['Session_ID'] == df['Session_ID'].shift(-1)), "YES", "NO")
minMax = df.groupby('clientId').agg(minTime=('Time', 'min'), maxTime=('Time', 'max'))
minMax['Diff'] = minMax['maxTime'] - minMax['minTime']
df = df.merge(minMax[['Diff']], on='clientId')
</code></pre>
<p>So I tried sorting everything by Time and clientId so that rows for the same clientId appear one after another. Then I tried comparing each row to the row above on clientId and Session_ID: if both are the same, return YES; if either is different, show NO. And then I did the calculation between the first and last instance of each clientId. But I am getting wrong values when I compare each row to the row above.</p>
<p>Here is the output</p>
<pre><code> Timestamp clientId Session_ID Time SameSession? Diff
0 6/12/2021 15:05 27255667 ab89 2021-06-12 15:05:00 NO 0 days 00:00:00
1 6/12/2021 19:56 118698247 684a 2021-06-12 19:56:00 NO 0 days 00:00:00
2 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days 00:07:00
3 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days 00:07:00
4 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days 00:07:00
5 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 NO 0 days 00:07:00
6 6/13/2021 0:06 99492237 a5fd 2021-06-13 00:06:00 NO 0 days 00:07:00
7 6/13/2021 0:41 142584462 c23000 2021-06-13 00:41:00 YES 0 days 22:53:00
8 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days 22:53:00
9 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days 22:53:00
10 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days 22:53:00
11 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 YES 0 days 22:53:00
12 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 NO 0 days 22:53:00
13 6/13/2021 23:34 142584462 7d97 2021-06-13 23:34:00 NO 0 days 22:53:00
</code></pre>
<p>So how do I need to adjust my code so I can flag the new Session_ID for the same clientId and show the time difference between the new session and the previous session for that client?</p>
<p>The output i expect to get should look like this:</p>
<pre><code> Timestamp clientId Session_ID Time SameSession? Diff
0 6/12/2021 15:05 27255667 ab89 2021-06-12 15:05:00 NO 0 days
1 6/12/2021 19:56 118698247 684a 2021-06-12 19:56:00 NO 0 days
2 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
3 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
4 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
5 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
6 6/13/2021 0:06 99492237 a5fd 2021-06-13 00:06:00 NO 0 days 00:07:00
7 6/13/2021 0:41 142584462 c23000 2021-06-13 00:41:00 NO 0 days
8 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days
9 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days
10 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days
11 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 YES 0 days
12 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 YES 0 days
13 6/13/2021 23:34 142584462 7d97 2021-06-13 23:34:00 NO 0 days 22:53:00
</code></pre>
| 68,077,445
| 2021-06-22T02:02:25.683000
| 1
| null | 0
| 85
|
python|pandas
|
<p>To get correct values in SameSession you need <code>shift(1)</code> instead of <code>shift(-1)</code></p>
<hr />
<p>Your <code>Diff</code> has type <code>Timedelta</code> and you would need to convert it to string to get it in <code>excel</code>.</p>
<hr />
<p>I would create column with default value <code>0 days</code></p>
<pre><code> df['Diff'] = '0 days'
</code></pre>
<p>and later work with every group to put new value only in last row in group</p>
<pre><code>groups = df.groupby('clientId')
for val, group in groups:
diff = group['Time'].max() - group['Time'].min()
last_index = group.index[-1]
if diff.total_seconds() > 0:
        df.loc[last_index,'Diff'] = str(diff)
    #else:
    #    df.loc[last_index,'Diff'] = '0 days'
</code></pre>
<hr />
<p>Minimal working code</p>
<pre><code>text = '''Timestamp,clientId,Session_ID
6/12/2021 15:05,27255667,ab89
6/12/2021 19:56,118698247,684a
6/12/2021 23:59,99492237,a4fd
6/12/2021 23:59,99492237,a4fd
6/12/2021 23:59,99492237,a4fd
6/12/2021 23:59,99492237,a4fd
6/13/2021 00:06,99492237,a5fd
6/13/2021 00:41,142584462,c23000
6/13/2021 23:33,142584462,c23000
6/13/2021 23:33,142584462,c23000
6/13/2021 23:33,142584462,c23000
6/13/2021 23:34,142584462,c23000
6/13/2021 23:34,142584462,c23000
6/13/2021 23:34,142584462,7d97'''
import pandas as pd
import numpy as np
import io
# importing data from SampleData.csv
#df = pd.read_csv('data/SampleData.csv', converters={'clientId':str})
df = pd.read_csv(io.StringIO(text), converters={'clientId':str})
#print(df)
df["Time"] = pd.to_datetime(df["Timestamp"])
df.sort_values(["Timestamp", "clientId"], ascending = (True, True))
df["SameSession?"] = np.where((df['clientId'] == df['clientId'].shift(1)) & (df['Session_ID'] == df['Session_ID'].shift(1)), "YES", "NO")
# create column with default value
df['Diff'] = '0 days'
#print(df)
groups = df.groupby('clientId')
for val, group in groups:
diff = group['Time'].max() - group['Time'].min()
last_index = group.index[-1]
if diff.total_seconds() > 0:
df.loc[last_index,'Diff'] = str(diff)
#else:
# df.loc[last_index,'Diff'] = '0 days'
print(df[['clientId', 'Diff']])
</code></pre>
<p>Result:</p>
<pre><code> clientId Diff
0 27255667 0 days
1 118698247 0 days
2 99492237 0 days
3 99492237 0 days
4 99492237 0 days
5 99492237 0 days
6 99492237 0 days 00:07:00
7 142584462 0 days
8 142584462 0 days
9 142584462 0 days
10 142584462 0 days
11 142584462 0 days
12 142584462 0 days
13 142584462 0 days 22:53:00
</code></pre>
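<hr />
<p>As a side note (not part of the approach above): if you prefer to avoid the explicit Python loop, the same per-client span can be computed with <code>groupby().transform()</code>. This is only a sketch that assumes the <code>df</code>, imports and column names from the minimal working code above:</p>
<pre><code># vectorized sketch -- assumes df, pd and np from the minimal working code above
g = df.groupby('clientId')['Time']
span = g.transform('max') - g.transform('min') # per-row Timedelta span for that client
is_last = ~df.duplicated('clientId', keep='last') # True only on the last row of each client
df['Diff'] = np.where(is_last & (span.dt.total_seconds() > 0), span.astype(str), '0 days')
</code></pre>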
| 2021-06-22T04:12:55.790000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html
|
To get correct values in SameSession you need shift(1) instead of shift(-1)
Your Diff has type Timedelta and you would need to convert it to string to get it in excel.
I would create column with default value 0 days
df['Diff'] = '0 days'
and later work with every group to put new value only in last row in group
groups = df.groupby('clientId')
for val, group in groups:
diff = group['Time'].max() - group['Time'].min()
last_index = group.index[-1]
if diff.total_seconds() > 0:
        df.loc[last_index,'Diff'] = str(diff)
    #else:
    #    df.loc[last_index,'Diff'] = '0 days'
Minimal working code
text = '''Timestamp,clientId,Session_ID
6/12/2021 15:05,27255667,ab89
6/12/2021 19:56,118698247,684a
6/12/2021 23:59,99492237,a4fd
6/12/2021 23:59,99492237,a4fd
6/12/2021 23:59,99492237,a4fd
6/12/2021 23:59,99492237,a4fd
6/13/2021 00:06,99492237,a5fd
6/13/2021 00:41,142584462,c23000
6/13/2021 23:33,142584462,c23000
6/13/2021 23:33,142584462,c23000
6/13/2021 23:33,142584462,c23000
6/13/2021 23:34,142584462,c23000
6/13/2021 23:34,142584462,c23000
6/13/2021 23:34,142584462,7d97'''
import pandas as pd
import numpy as np
import io
# importing data from SampleData.csv
#df = pd.read_csv('data/SampleData.csv', converters={'clientId':str})
df = pd.read_csv(io.StringIO(text), converters={'clientId':str})
#print(df)
df["Time"] = pd.to_datetime(df["Timestamp"])
df.sort_values(["Timestamp", "clientId"], ascending = (True, True))
df["SameSession?"] = np.where((df['clientId'] == df['clientId'].shift(1)) & (df['Session_ID'] == df['Session_ID'].shift(1)), "YES", "NO")
# create column with default value
df['Diff'] = '0 days'
#print(df)
groups = df.groupby('clientId')
for val, group in groups:
diff = group['Time'].max() - group['Time'].min()
last_index = group.index[-1]
if diff.total_seconds() > 0:
df.loc[last_index,'Diff'] = str(diff)
#else:
# df.loc[last_index,'Diff'] = '0 days'
print(df[['clientId', 'Diff']])
Result:
clientId Diff
0 27255667 0 days
1 118698247 0 days
2 99492237 0 days
3 99492237 0 days
4 99492237 0 days
5 99492237 0 days
6 99492237 0 days 00:07:00
7 142584462 0 days
8 142584462 0 days
9 142584462 0 days
10 142584462 0 days
11 142584462 0 days
12 142584462 0 days
13 142584462 0 days 22:53:00
| 0
| 2,477
|
Python - How to compare the values for row and previous row and return time difference based on condition?
I have a dataset that contains TimeStamp, CustomerID and Session ID.
As you can see the clientId and Session_ID do repeat over the time.
Timestamp clientId Session_ID
0 6/12/2021 15:05 27255667 ab89
1 6/12/2021 19:56 118698247 684a
2 6/12/2021 23:59 99492237 a4fd
3 6/12/2021 23:59 99492237 a4fd
4 6/12/2021 23:59 99492237 a4fd
5 6/12/2021 23:59 99492237 a4fd
6 6/13/2021 0:06 99492237 a5fd
7 6/13/2021 0:41 142584462 c23000
8 6/13/2021 23:33 142584462 c23000
9 6/13/2021 23:33 142584462 c23000
10 6/13/2021 23:33 142584462 c23000
11 6/13/2021 23:34 142584462 c23000
12 6/13/2021 23:34 142584462 c23000
13 6/13/2021 23:34 142584462 7d97
I need to find the instances where a clientId gets a new Session_ID and then I need to calculate the time difference between the previous Session_ID and the new Session_ID for the same client.
For example, clientId 99492237 had 3 of the same session and then the 4th one was different. The time difference would be 6/13/2021 0:06 - 6/12/2021 23:59
This is what I tried so far:
# importing dependencies
import pandas as pd
import numpy as np
# importing data from SampleData.csv
df = pd.read_csv('data/SampleData.csv' ,converters={'clientId':str})
#converting timestamp string into timestap datatype
df["Time"] = pd.to_datetime(df["Timestamp"])
df.sort_values(["Timestamp", "clientId"], ascending = (True, True))
df["SameSession?"] = np.where((df['clientId'] == df['clientId'].shift(-1)) & (df['Session_ID'] == df['Session_ID'].shift(-1)), "YES", "NO")
minMax = df.groupby('clientId').agg(minTime=('Time', 'min'), maxTime=('Time', 'max'))
minMax['Diff'] = minMax['maxTime'] - minMax['minTime']
df = df.merge(minMax[['Diff']], on='clientId')
So I tried sorting everything by Time and clientId so I can get the same clientId one after another. Then I tried comparing each row to the row above for clientId and Session. If both are the same, return YES; if one is different, show NO. And then I did the calculation between the first and last instance of a clientId. But I am getting wrong values when I compare each row to the row above.
Here is the output
Timestamp clientId Session_ID Time SameSession? Diff
0 6/12/2021 15:05 27255667 ab89 2021-06-12 15:05:00 NO 0 days 00:00:00
1 6/12/2021 19:56 118698247 684a 2021-06-12 19:56:00 NO 0 days 00:00:00
2 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days 00:07:00
3 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days 00:07:00
4 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days 00:07:00
5 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 NO 0 days 00:07:00
6 6/13/2021 0:06 99492237 a5fd 2021-06-13 00:06:00 NO 0 days 00:07:00
7 6/13/2021 0:41 142584462 c23000 2021-06-13 00:41:00 YES 0 days 22:53:00
8 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days 22:53:00
9 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days 22:53:00
10 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days 22:53:00
11 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 YES 0 days 22:53:00
12 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 NO 0 days 22:53:00
13 6/13/2021 23:34 142584462 7d97 2021-06-13 23:34:00 NO 0 days 22:53:00
So how do I need to adjust my code so I can show the new Session_ID for the same clientId and show the time difference between the new session and the previous session for the same client?
The output i expect to get should look like this:
Timestamp clientId Session_ID Time SameSession? Diff
0 6/12/2021 15:05 27255667 ab89 2021-06-12 15:05:00 NO 0 days
1 6/12/2021 19:56 118698247 684a 2021-06-12 19:56:00 NO 0 days
2 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
3 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
4 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
5 6/12/2021 23:59 99492237 a4fd 2021-06-12 23:59:00 YES 0 days
6 6/13/2021 0:06 99492237 a5fd 2021-06-13 00:06:00 NO 0 days 00:07:00
7 6/13/2021 0:41 142584462 c23000 2021-06-13 00:41:00 NO 0 days
8 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days
9 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days
10 6/13/2021 23:33 142584462 c23000 2021-06-13 23:33:00 YES 0 days
11 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 YES 0 days
12 6/13/2021 23:34 142584462 c23000 2021-06-13 23:34:00 YES 0 days
13 6/13/2021 23:34 142584462 7d97 2021-06-13 23:34:00 NO 0 days 22:53:00
|
64,104,390
|
How to filter multiple rows with explicit values and select a column using pandas
|
<p><strong>Selecting Multiple Explicit Rows and Column to then Calculate Mean</strong></p>
<p>Hi everyone, thank you for taking a look at this question. I'm working with a pokémon data set and looking to select explicit rows and a column to then calculate the mean for that value.</p>
<p>The Columns worked with are <code>type_1</code> , <code>generation</code> and <code>total_points</code>.</p>
<p>The Row values are <code>Grass</code> and <code>1</code></p>
<p><code>Grass</code> corresponds to the <code>type</code> and <code>1</code> to the <code>generation</code>.</p>
<pre class="lang-py prettyprint-override"><code>grass_total_points = pokedex.loc[pokedex.type_1 == 'Grass', ['total_points']].mean()
</code></pre>
<p>The code above works and returns the total mean for all Grass types across all 8 generations, but I would like to retrieve them on a generation-by-generation basis.</p>
<pre class="lang-py prettyprint-override"><code>gen_1 = pokedex.loc[pokedex['generation'] == '1' & pokedex['type_1'] == 'Grass', ['total_points']].mean()
</code></pre>
<p>I attempted the code above with no luck; I searched around and cannot find any answers to this.</p>
| 64,104,779
| 2020-09-28T14:40:52.220000
| 1
| null | 1
| 86
|
python|pandas
|
<p>Jeremy, try the <code>groupby</code> method:</p>
<pre><code>import pandas as pd
pokedex = pd.DataFrame({'Type':['Grass','Grass','Grass','Sand','Sand'],
'Generation':[1,2,1,1,1],"TotalPoints":[50,10,20,30,40]})
pokedex.groupby(['Type','Generation'])['TotalPoints'].mean()
</code></pre>
<p>Should return:</p>
<pre><code>Type Generation
Grass 1 35
2 10
Sand 1 35
Name: TotalPoints, dtype: int64
</code></pre>
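<p>To read off a single combination from that result, or to filter it directly with a boolean mask (note the parentheses around each condition, which is what the attempt in the question was missing), here is a small sketch using the same toy frame:</p>
<pre><code># pick one (Type, Generation) pair from the groupby result
means = pokedex.groupby(['Type', 'Generation'])['TotalPoints'].mean()
means.loc[('Grass', 1)] # 35.0
# equivalent boolean-mask version -- each condition needs its own parentheses
pokedex.loc[(pokedex['Type'] == 'Grass') & (pokedex['Generation'] == 1), 'TotalPoints'].mean()
</code></pre>
<p>With the real data the column names would be <code>type_1</code>, <code>generation</code> and <code>total_points</code>, and whether to compare <code>generation</code> against <code>1</code> or <code>'1'</code> depends on its dtype.</p>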
| 2020-09-28T15:04:55.510000
| 1
|
https://pandas.pydata.org/docs/user_guide/indexing.html
|
Indexing and selecting data#
The axis labeling information in pandas objects serves many purposes:
Identifies data (i.e. provides metadata) using known indicators,
important for analysis, visualization, and interactive console display.
Enables automatic and explicit data alignment.
Allows intuitive getting and setting of subsets of the data set.
In this section, we will focus on the final point: namely, how to slice, dice,
and generally get and set subsets of pandas objects. The primary focus will be
on Series and DataFrame as they have received more development attention in
this area.
Note
The Python and NumPy indexing operators [] and attribute operator .
provide quick and easy access to pandas data structures across a wide range
of use cases. This makes interactive work intuitive, as there’s little new
Jeremy, try the groupby method:
import pandas as pd
pokedex = pd.DataFrame({'Type':['Grass','Grass','Grass','Sand','Sand'],
'Generation':[1,2,1,1,1],"TotalPoints":[50,10,20,30,40]})
pokedex.groupby(['Type','Generation'])['TotalPoints'].mean()
Should return:
Type Generation
Grass 1 35
2 10
Sand 1 35
Name: TotalPoints, dtype: int64
to learn if you already know how to deal with Python dictionaries and NumPy
arrays. However, since the type of the data to be accessed isn’t known in
advance, directly using standard operators has some optimization limits. For
production code, we recommended that you take advantage of the optimized
pandas data access methods exposed in this chapter.
Warning
Whether a copy or a reference is returned for a setting operation, may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies.
Different choices for indexing#
Object selection has had a number of user-requested additions in order to
support more explicit location based indexing. pandas now supports three types
of multi-axis indexing.
.loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when the items are not found. Allowed inputs are:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a
label of the index. This use is not an integer position along the
index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels
and Endpoints are inclusive.)
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Label.
.iloc is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array. .iloc will raise IndexError if a requested
indexer is out-of-bounds, except slice indexers which allow
out-of-bounds indexing. (this conforms with Python/NumPy slice
semantics). Allowed inputs are:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Position,
Advanced Indexing and Advanced
Hierarchical.
.loc, .iloc, and also [] indexing can accept a callable as indexer. See more at Selection By Callable.
Getting values from an object with multi-axes selection uses the following
notation (using .loc as an example, but the following applies to .iloc as
well). Any of the axes accessors may be the null slice :. Axes left out of
the specification are assumed to be :, e.g. p.loc['a'] is equivalent to
p.loc['a', :].
Object Type
Indexers
Series
s.loc[indexer]
DataFrame
df.loc[row_indexer,column_indexer]
Basics#
As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a. __getitem__
for those familiar with implementing class behavior in Python) is selecting out
lower-dimensional slices. The following table shows return type values when
indexing pandas objects with []:
Object Type
Selection
Return Value Type
Series
series[label]
scalar value
DataFrame
frame[colname]
Series corresponding to colname
Here we construct a simple time series data set to use for illustrating the
indexing functionality:
In [1]: dates = pd.date_range('1/1/2000', periods=8)
In [2]: df = pd.DataFrame(np.random.randn(8, 4),
...: index=dates, columns=['A', 'B', 'C', 'D'])
...:
In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
Note
None of the indexing functionality is time series specific unless
specifically stated.
Thus, as per above, we have the most basic indexing using []:
In [4]: s = df['A']
In [5]: s[dates[5]]
Out[5]: -0.6736897080883706
You can pass a list of columns to [] to select columns in that order.
If a column is not contained in the DataFrame, an exception will be
raised. Multiple columns can also be set in this manner:
In [6]: df
Out[6]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [7]: df[['B', 'A']] = df[['A', 'B']]
In [8]: df
Out[8]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 0.844885
You may find this useful for applying a transform (in-place) to a subset of the
columns.
Warning
pandas aligns all AXES when setting Series and DataFrame from .loc, and .iloc.
This will not modify df because the column alignment is before value assignment.
In [9]: df[['A', 'B']]
Out[9]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
In [10]: df.loc[:, ['B', 'A']] = df[['A', 'B']]
In [11]: df[['A', 'B']]
Out[11]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
The correct way to swap column values is by using raw values:
In [12]: df.loc[:, ['B', 'A']] = df[['A', 'B']].to_numpy()
In [13]: df[['A', 'B']]
Out[13]:
A B
2000-01-01 0.469112 -0.282863
2000-01-02 1.212112 -0.173215
2000-01-03 -0.861849 -2.104569
2000-01-04 0.721555 -0.706771
2000-01-05 -0.424972 0.567020
2000-01-06 -0.673690 0.113648
2000-01-07 0.404705 0.577046
2000-01-08 -0.370647 -1.157892
Attribute access#
You may access an index on a Series or column on a DataFrame directly
as an attribute:
In [14]: sa = pd.Series([1, 2, 3], index=list('abc'))
In [15]: dfa = df.copy()
In [16]: sa.b
Out[16]: 2
In [17]: dfa.A
Out[17]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
In [18]: sa.a = 5
In [19]: sa
Out[19]:
a 5
b 2
c 3
dtype: int64
In [20]: dfa.A = list(range(len(dfa.index))) # ok if A already exists
In [21]: dfa
Out[21]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
In [22]: dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
In [23]: dfa
Out[23]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
Warning
You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed.
See here for an explanation of valid identifiers.
The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed, but s['min'] is possible.
Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items.
In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
If you are using the IPython environment, you may also use tab-completion to
see these accessible attributes.
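A minimal sketch of the name-conflict rule in the warning above (not from the original guide): attribute access falls back to the method or reserved name, while [] indexing still reaches the labeled element.
import pandas as pd
s = pd.Series([10, 20], index=['min', 'total'])
s.min # the Series.min method wins over the 'min' label
s['min'] # 10 -- standard indexing still reaches the element
s.total # 20 -- fine, 'total' does not clash with anything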
You can also assign a dict to a row of a DataFrame:
In [24]: x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
In [25]: x.iloc[1] = {'x': 9, 'y': 99}
In [26]: x
Out[26]:
x y
0 1 3
1 9 99
2 3 5
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
if you try to use attribute access to create a new column, it creates a new attribute rather than a
new column. In 0.21.0 and later, this will raise a UserWarning:
In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})
In [2]: df.two = [4, 5, 6]
UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access
In [3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0
Slicing ranges#
The most robust and consistent way of slicing ranges along arbitrary axes is
described in the Selection by Position section
detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of
the values and the corresponding labels:
In [27]: s[:5]
Out[27]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64
In [28]: s[::2]
Out[28]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64
In [29]: s[::-1]
Out[29]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
2000-01-05 -0.424972
2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64
Note that setting works as well:
In [30]: s2 = s.copy()
In [31]: s2[:5] = 0
In [32]: s2
Out[32]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
With DataFrame, slicing inside of [] slices the rows. This is provided
largely as a convenience since it is such a common operation.
In [33]: df[:3]
Out[33]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [34]: df[::-1]
Out[34]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
Selection by label#
Warning
Whether a copy or a reference is returned for a setting operation, may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
Warning
.loc is strict when you present slicers that are not compatible (or convertible) with the index type. For example
using integers in a DatetimeIndex. These will raise a TypeError.
In [35]: dfl = pd.DataFrame(np.random.randn(5, 4),
....: columns=list('ABCD'),
....: index=pd.date_range('20130101', periods=5))
....:
In [36]: dfl
Out[36]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646
In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>
String likes in slicing can be convertible to the type of the index and lead to natural slicing.
In [37]: dfl.loc['20130102':'20130104']
Out[37]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
Warning
Changed in version 1.0.0.
pandas will raise a KeyError if indexing with a list with missing labels. See list-like Using loc with
missing keys in a list is Deprecated.
pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based protocol.
Every label asked for must be in the index, or a KeyError will be raised.
When slicing, both the start bound AND the stop bound are included, if present in the index.
Integers are valid labels, but they refer to the label and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer position along the index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels.
A boolean array.
A callable, see Selection By Callable.
In [38]: s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
In [39]: s1
Out[39]:
a 1.431256
b 1.340309
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [40]: s1.loc['c':]
Out[40]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [41]: s1.loc['b']
Out[41]: 1.3403088497993827
Note that setting works as well:
In [42]: s1.loc['c':] = 0
In [43]: s1
Out[43]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000
e 0.000000
f 0.000000
dtype: float64
With a DataFrame:
In [44]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
In [46]: df1.loc[['a', 'b', 'd'], :]
Out[46]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
d 0.974466 -2.006747 -0.410001 -0.078638
Accessing via label slices:
In [47]: df1.loc['d':, 'A':'C']
Out[47]:
A B C
d 0.974466 -2.006747 -0.410001
e 0.545952 -1.219217 -1.226825
f -1.281247 -0.727707 -0.121306
For getting a cross section using a label (equivalent to df.xs('a')):
In [48]: df1.loc['a']
Out[48]:
A 0.132003
B -0.827317
C -0.076467
D -1.187678
Name: a, dtype: float64
For getting values with a boolean array:
In [49]: df1.loc['a'] > 0
Out[49]:
A True
B False
C False
D False
Name: a, dtype: bool
In [50]: df1.loc[:, df1.loc['a'] > 0]
Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
NA values in a boolean array propagate as False:
Changed in version 1.0.2.
In [51]: mask = pd.array([True, False, True, False, pd.NA, False], dtype="boolean")
In [52]: mask
Out[52]:
<BooleanArray>
[True, False, True, False, <NA>, False]
Length: 6, dtype: boolean
In [53]: df1[mask]
Out[53]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
c 1.024180 0.569605 0.875906 -2.211372
For getting a value explicitly:
# this is also equivalent to ``df1.at['a','A']``
In [54]: df1.loc['a', 'A']
Out[54]: 0.13200317033032932
Slicing with labels#
When using .loc with slices, if both the start and the stop labels are
present in the index, then elements located between the two (including them)
are returned:
In [55]: s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])
In [56]: s.loc[3:5]
Out[56]:
3 b
2 c
5 d
dtype: object
If at least one of the two is absent, but the index is sorted, and can be
compared against start and stop labels, then slicing will still work as
expected, by selecting labels which rank between the two:
In [57]: s.sort_index()
Out[57]:
0 a
2 c
3 b
4 e
5 d
dtype: object
In [58]: s.sort_index().loc[1:6]
Out[58]:
2 c
3 b
4 e
5 d
dtype: object
However, if at least one of the two is absent and the index is not sorted, an
error will be raised (since doing otherwise would be computationally expensive,
as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, s.loc[1:6] would raise KeyError.
For the rationale behind this behavior, see
Endpoints are inclusive.
In [59]: s = pd.Series(list('abcdef'), index=[0, 3, 2, 5, 4, 2])
In [60]: s.loc[3:5]
Out[60]:
3 b
2 c
5 d
dtype: object
Also, if the index has duplicate labels and either the start or the stop label is duplicated,
an error will be raised. For instance, in the above example, s.loc[2:5] would raise a KeyError.
For more information about duplicate labels, see
Duplicate Labels.
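A minimal sketch of that failure mode, assuming the duplicated-index Series defined just above (the exact error message can vary by pandas version):
import pandas as pd
s = pd.Series(list('abcdef'), index=[0, 3, 2, 5, 4, 2])
try:
    s.loc[2:5] # the start label 2 appears twice in the index
except KeyError as err:
    print('slicing failed:', err)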
Selection by position#
Warning
Whether a copy or a reference is returned for a setting operation, may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
pandas provides a suite of methods in order to get purely integer based indexing. The semantics follow closely Python and NumPy slicing. These are 0-based indexing. When slicing, the start bound is included, while the upper bound is excluded. Trying to use a non-integer, even a valid label will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array.
A callable, see Selection By Callable.
In [61]: s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
In [62]: s1
Out[62]:
0 0.695775
2 0.341734
4 0.959726
6 -1.110336
8 -0.619976
dtype: float64
In [63]: s1.iloc[:3]
Out[63]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64
In [64]: s1.iloc[3]
Out[64]: -1.110336102891167
Note that setting works as well:
In [65]: s1.iloc[:3] = 0
In [66]: s1
Out[66]:
0 0.000000
2 0.000000
4 0.000000
6 -1.110336
8 -0.619976
dtype: float64
With a DataFrame:
In [67]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list(range(0, 12, 2)),
....: columns=list(range(0, 8, 2)))
....:
In [68]: df1
Out[68]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602
Select via integer slicing:
In [69]: df1.iloc[:3]
Out[69]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [70]: df1.iloc[1:5, 2:4]
Out[70]:
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427
Select via integer list:
In [71]: df1.iloc[[1, 3, 5], [1, 3]]
Out[71]:
2 6
2 -0.154951 -2.179861
6 -0.345352 0.690579
10 -1.236269 -0.487602
In [72]: df1.iloc[1:3, :]
Out[72]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [73]: df1.iloc[:, 1:3]
Out[73]:
2 4
0 -0.732339 0.687738
2 -0.154951 0.301624
4 -0.954208 1.462696
6 -0.345352 1.314232
8 2.396780 0.014871
10 -1.236269 0.896171
# this is also equivalent to ``df1.iat[1,1]``
In [74]: df1.iloc[1, 1]
Out[74]: -0.1549507744249032
For getting a cross section using an integer position (equiv to df.xs(1)):
In [75]: df1.iloc[1]
Out[75]:
0 0.403310
2 -0.154951
4 0.301624
6 -2.179861
Name: 2, dtype: float64
Out of range slice indexes are handled gracefully just as in Python/NumPy.
# these are allowed in Python/NumPy.
In [76]: x = list('abcdef')
In [77]: x
Out[77]: ['a', 'b', 'c', 'd', 'e', 'f']
In [78]: x[4:10]
Out[78]: ['e', 'f']
In [79]: x[8:10]
Out[79]: []
In [80]: s = pd.Series(x)
In [81]: s
Out[81]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object
In [82]: s.iloc[4:10]
Out[82]:
4 e
5 f
dtype: object
In [83]: s.iloc[8:10]
Out[83]: Series([], dtype: object)
Note that using slices that go out of bounds can result in
an empty axis (e.g. an empty DataFrame being returned).
In [84]: dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [85]: dfl
Out[85]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885
In [86]: dfl.iloc[:, 2:3]
Out[86]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [87]: dfl.iloc[:, 1:3]
Out[87]:
B
0 -2.182937
1 0.084844
2 1.519970
3 0.600178
4 0.132885
In [88]: dfl.iloc[4:6]
Out[88]:
A B
4 0.27423 0.132885
A single indexer that is out of bounds will raise an IndexError.
A list of indexers where any element is out of bounds will raise an
IndexError.
>>> dfl.iloc[[4, 5, 6]]
IndexError: positional indexers are out-of-bounds
>>> dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds
Selection by callable#
.loc, .iloc, and also [] indexing can accept a callable as indexer.
The callable must be a function with one argument (the calling Series or DataFrame) that returns valid output for indexing.
In [89]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [90]: df1
Out[90]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143
c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478
In [91]: df1.loc[lambda df: df['A'] > 0, :]
Out[91]:
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580
In [92]: df1.loc[:, lambda df: ['A', 'B']]
Out[92]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [93]: df1.iloc[:, lambda df: [0, 1]]
Out[93]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [94]: df1[lambda df: df.columns[0]]
Out[94]:
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64
You can use callable indexing in Series.
In [95]: df1['A'].loc[lambda s: s > 0]
Out[95]:
c 0.299368
e 1.289997
Name: A, dtype: float64
Using these methods / indexers, you can chain data selection operations
without using a temporary variable.
In [96]: bb = pd.read_csv('data/baseball.csv', index_col='id')
In [97]: (bb.groupby(['year', 'team']).sum(numeric_only=True)
....: .loc[lambda df: df['r'] > 100])
....:
Out[97]:
stint g ab r h X2b ... so ibb hbp sh sf gidp
year team ...
2007 CIN 6 379 745 101 203 35 ... 127.0 14.0 1.0 1.0 15.0 18.0
DET 5 301 1062 162 283 54 ... 176.0 3.0 10.0 4.0 8.0 28.0
HOU 4 311 926 109 218 47 ... 212.0 3.0 9.0 16.0 6.0 17.0
LAN 11 413 1021 153 293 61 ... 141.0 8.0 9.0 3.0 8.0 29.0
NYN 13 622 1854 240 509 101 ... 310.0 24.0 23.0 18.0 15.0 48.0
SFN 5 482 1305 198 337 67 ... 188.0 51.0 8.0 16.0 6.0 41.0
TEX 2 198 729 115 200 40 ... 140.0 4.0 5.0 2.0 8.0 16.0
TOR 4 459 1408 187 378 96 ... 265.0 16.0 12.0 4.0 16.0 38.0
[8 rows x 18 columns]
Combining positional and label-based indexing#
If you wish to get the 0th and the 2nd elements from the index in the ‘A’ column, you can do:
In [98]: dfd = pd.DataFrame({'A': [1, 2, 3],
....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:
In [99]: dfd
Out[99]:
A B
a 1 4
b 2 5
c 3 6
In [100]: dfd.loc[dfd.index[[0, 2]], 'A']
Out[100]:
a 1
c 3
Name: A, dtype: int64
This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using
positional indexing to select things.
In [101]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]
Out[101]:
a 1
c 3
Name: A, dtype: int64
For getting multiple indexers, using .get_indexer:
In [102]: dfd.iloc[[0, 2], dfd.columns.get_indexer(['A', 'B'])]
Out[102]:
A B
a 1 4
c 3 6
Indexing with list with missing labels is deprecated#
Warning
Changed in version 1.0.0.
Using .loc or [] with a list with one or more missing labels will no longer reindex, in favor of .reindex.
In prior versions, using .loc[list-of-labels] would work as long as at least 1 of the keys was found (otherwise it
would raise a KeyError). This behavior was changed and will now raise a KeyError if at least one label is missing.
The recommended alternative is to use .reindex().
For example.
In [103]: s = pd.Series([1, 2, 3])
In [104]: s
Out[104]:
0 1
1 2
2 3
dtype: int64
Selection with all keys found is unchanged.
In [105]: s.loc[[1, 2]]
Out[105]:
1 2
2 3
dtype: int64
Previous behavior
In [4]: s.loc[[1, 2, 3]]
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Current behavior
In [4]: s.loc[[1, 2, 3]]
Passing list-likes to .loc with any non-matching elements will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Reindexing#
The idiomatic way to achieve selecting potentially not-found elements is via .reindex(). See also the section on reindexing.
In [106]: s.reindex([1, 2, 3])
Out[106]:
1 2.0
2 3.0
3 NaN
dtype: float64
Alternatively, if you want to select only valid keys, the following is idiomatic and efficient; it is guaranteed to preserve the dtype of the selection.
In [107]: labels = [1, 2, 3]
In [108]: s.loc[s.index.intersection(labels)]
Out[108]:
1 2
2 3
dtype: int64
Having a duplicated index will raise for a .reindex():
In [109]: s = pd.Series(np.arange(4), index=['a', 'a', 'b', 'c'])
In [110]: labels = ['c', 'd']
In [17]: s.reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Generally, you can intersect the desired labels with the current
axis, and then reindex.
In [111]: s.loc[s.index.intersection(labels)].reindex(labels)
Out[111]:
c 3.0
d NaN
dtype: float64
However, this would still raise if your resulting index is duplicated.
In [41]: labels = ['a', 'd']
In [42]: s.loc[s.index.intersection(labels)].reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Selecting random samples#
A random selection of rows or columns from a Series or DataFrame can be obtained with the sample() method. The method will sample rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
In [112]: s = pd.Series([0, 1, 2, 3, 4, 5])
# When no arguments are passed, returns 1 row.
In [113]: s.sample()
Out[113]:
4 4
dtype: int64
# One may specify either a number of rows:
In [114]: s.sample(n=3)
Out[114]:
0 0
4 4
1 1
dtype: int64
# Or a fraction of the rows:
In [115]: s.sample(frac=0.5)
Out[115]:
5 5
3 3
1 1
dtype: int64
By default, sample will return each row at most once, but one can also sample with replacement
using the replace option:
In [116]: s = pd.Series([0, 1, 2, 3, 4, 5])
# Without replacement (default):
In [117]: s.sample(n=6, replace=False)
Out[117]:
0 0
1 1
5 5
3 3
2 2
4 4
dtype: int64
# With replacement:
In [118]: s.sample(n=6, replace=True)
Out[118]:
0 0
4 4
3 3
2 2
4 4
4 4
dtype: int64
By default, each row has an equal probability of being selected, but if you want rows
to have different probabilities, you can pass the sample function sampling weights as
weights. These weights can be a list, a NumPy array, or a Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights by the sum of the weights. For example:
In [119]: s = pd.Series([0, 1, 2, 3, 4, 5])
In [120]: example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]
In [121]: s.sample(n=3, weights=example_weights)
Out[121]:
5 5
4 4
3 3
dtype: int64
# Weights will be re-normalized automatically
In [122]: example_weights2 = [0.5, 0, 0, 0, 0, 0]
In [123]: s.sample(n=1, weights=example_weights2)
Out[123]:
0 0
dtype: int64
When applied to a DataFrame, you can use a column of the DataFrame as sampling weights
(provided you are sampling rows and not columns) by simply passing the name of the column
as a string.
In [124]: df2 = pd.DataFrame({'col1': [9, 8, 7, 6],
.....: 'weight_column': [0.5, 0.4, 0.1, 0]})
.....:
In [125]: df2.sample(n=3, weights='weight_column')
Out[125]:
col1 weight_column
1 8 0.4
0 9 0.5
2 7 0.1
sample also allows users to sample columns instead of rows using the axis argument.
In [126]: df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
In [127]: df3.sample(n=1, axis=1)
Out[127]:
col1
0 1
1 2
2 3
Finally, one can also set a seed for sample’s random number generator using the random_state argument, which will accept either an integer (as a seed) or a NumPy RandomState object.
In [128]: df4 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
# With a given seed, the sample will always draw the same rows.
In [129]: df4.sample(n=2, random_state=2)
Out[129]:
col1 col2
2 3 4
1 2 3
In [130]: df4.sample(n=2, random_state=2)
Out[130]:
col1 col2
2 3 4
1 2 3
Setting with enlargement#
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis.
In the Series case this is effectively an appending operation.
In [131]: se = pd.Series([1, 2, 3])
In [132]: se
Out[132]:
0 1
1 2
2 3
dtype: int64
In [133]: se[5] = 5.
In [134]: se
Out[134]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
A DataFrame can be enlarged on either axis via .loc.
In [135]: dfi = pd.DataFrame(np.arange(6).reshape(3, 2),
.....: columns=['A', 'B'])
.....:
In [136]: dfi
Out[136]:
A B
0 0 1
1 2 3
2 4 5
In [137]: dfi.loc[:, 'C'] = dfi.loc[:, 'A']
In [138]: dfi
Out[138]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
This is like an append operation on the DataFrame.
In [139]: dfi.loc[3] = 5
In [140]: dfi
Out[140]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
Fast scalar value getting and setting#
Since indexing with [] must handle a lot of cases (single-label access,
slicing, boolean indexing, etc.), it has a bit of overhead in order to figure
out what you’re asking for. If you only want to access a scalar value, the
fastest way is to use the at and iat methods, which are implemented on
all of the data structures.
Similarly to loc, at provides label-based scalar lookups, while iat provides integer-based lookups analogously to iloc.
In [141]: s.iat[5]
Out[141]: 5
In [142]: df.at[dates[5], 'A']
Out[142]: -0.6736897080883706
In [143]: df.iat[3, 0]
Out[143]: 0.7215551622443669
You can also set using these same indexers.
In [144]: df.at[dates[5], 'E'] = 7
In [145]: df.iat[3, 0] = 7
at may enlarge the object in-place as above if the indexer is missing.
In [146]: df.at[dates[-1] + pd.Timedelta('1 day'), 0] = 7
In [147]: df
Out[147]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-05 -0.424972 0.567020 0.276232 -1.087401 NaN NaN
2000-01-06 -0.673690 0.113648 -1.478427 0.524988 7.0 NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885 NaN NaN
2000-01-09 NaN NaN NaN NaN NaN 7.0
Boolean indexing#
Another common operation is the use of boolean vectors to filter the data.
The operators are: | for or, & for and, and ~ for not.
These must be grouped by using parentheses, since by default Python will
evaluate an expression such as df['A'] > 2 & df['B'] < 3 as
df['A'] > (2 & df['B']) < 3, while the desired evaluation order is
(df['A'] > 2) & (df['B'] < 3).
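As a quick illustration of that precedence pitfall (a sketch, not part of the original guide; the exact error message can vary by pandas version):
import pandas as pd
df = pd.DataFrame({'A': [1, 3, 5], 'B': [2, 2, 4]})
df[(df['A'] > 2) & (df['B'] < 3)] # correctly grouped: rows where both conditions hold
try:
    df[df['A'] > 2 & df['B'] < 3] # parsed as df['A'] > (2 & df['B']) < 3
except ValueError as err:
    print(err) # truth value of a Series is ambiguous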
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
In [148]: s = pd.Series(range(-3, 4))
In [149]: s
Out[149]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64
In [150]: s[s > 0]
Out[150]:
4 1
5 2
6 3
dtype: int64
In [151]: s[(s < -1) | (s > 0.5)]
Out[151]:
0 -3
1 -2
4 1
5 2
6 3
dtype: int64
In [152]: s[~(s < 0)]
Out[152]:
3 0
4 1
5 2
6 3
dtype: int64
You may select rows from a DataFrame using a boolean vector the same length as
the DataFrame’s index (for example, something derived from one of the columns
of the DataFrame):
In [153]: df[df['A'] > 0]
Out[153]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
List comprehensions and the map method of Series can also be used to produce
more complex criteria:
In [154]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
.....: 'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
# only want 'two' or 'three'
In [155]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [156]: df2[criterion]
Out[156]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# equivalent but slower
In [157]: df2[[x.startswith('t') for x in df2['a']]]
Out[157]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# Multiple criteria
In [158]: df2[criterion & (df2['b'] == 'x')]
Out[158]:
a b c
3 three x 0.361719
With the choice methods Selection by Label, Selection by Position,
and Advanced Indexing you may select along more than one axis using boolean vectors combined with other indexing expressions.
In [159]: df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
Out[159]:
b c
3 x 0.361719
Warning
iloc supports two kinds of boolean indexing. If the indexer is a boolean Series,
an error will be raised. For instance, in the following example, df.iloc[s.values, 1] is ok.
The boolean indexer is an array. But df.iloc[s, 1] would raise ValueError.
In [160]: df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
.....: index=list('abc'),
.....: columns=['A', 'B'])
.....:
In [161]: s = (df['A'] > 2)
In [162]: s
Out[162]:
a False
b True
c True
Name: A, dtype: bool
In [163]: df.loc[s, 'B']
Out[163]:
b 4
c 6
Name: B, dtype: int64
In [164]: df.iloc[s.values, 1]
Out[164]:
b 4
c 6
Name: B, dtype: int64
Indexing with isin#
Consider the isin() method of Series, which returns a boolean
vector that is true wherever the Series elements exist in the passed list.
This allows you to select rows where one or more columns have values you want:
In [165]: s = pd.Series(np.arange(5), index=np.arange(5)[::-1], dtype='int64')
In [166]: s
Out[166]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [167]: s.isin([2, 4, 6])
Out[167]:
4 False
3 False
2 True
1 False
0 True
dtype: bool
In [168]: s[s.isin([2, 4, 6])]
Out[168]:
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for the cases
when you don’t know which of the sought labels are in fact present:
In [169]: s[s.index.isin([2, 4, 6])]
Out[169]:
4 0
2 2
dtype: int64
# compare it to the following
In [170]: s.reindex([2, 4, 6])
Out[170]:
2 2.0
4 0.0
6 NaN
dtype: float64
In addition to that, MultiIndex allows selecting a separate level to use
in the membership check:
In [171]: s_mi = pd.Series(np.arange(6),
.....: index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
.....:
In [172]: s_mi
Out[172]:
0 a 0
b 1
c 2
1 a 3
b 4
c 5
dtype: int64
In [173]: s_mi.iloc[s_mi.index.isin([(1, 'a'), (2, 'b'), (0, 'c')])]
Out[173]:
0 c 2
1 a 3
dtype: int64
In [174]: s_mi.iloc[s_mi.index.isin(['a', 'c', 'e'], level=1)]
Out[174]:
0 a 0
c 2
1 a 3
c 5
dtype: int64
DataFrame also has an isin() method. When calling isin, pass a set of
values as either an array or dict. If values is an array, isin returns
a DataFrame of booleans that is the same shape as the original DataFrame, with True
wherever the element is in the sequence of values.
In [175]: df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
.....: 'ids2': ['a', 'n', 'c', 'n']})
.....:
In [176]: values = ['a', 'b', 1, 3]
In [177]: df.isin(values)
Out[177]:
vals ids ids2
0 True True True
1 False True False
2 True False False
3 False False False
Oftentimes you’ll want to match certain values with certain columns.
Just make values a dict where the key is the column, and the value is
a list of items you want to check for.
In [178]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [179]: df.isin(values)
Out[179]:
vals ids ids2
0 True True False
1 False True False
2 True False False
3 False False False
To return the DataFrame of booleans where the values are not in the original DataFrame,
use the ~ operator:
In [180]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [181]: ~df.isin(values)
Out[181]:
vals ids ids2
0 False False True
1 True False True
2 False True True
3 True True True
Combine DataFrame’s isin with the any() and all() methods to
quickly select subsets of your data that meet a given criteria.
To select a row where each column meets its own criterion:
In [182]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
In [183]: row_mask = df.isin(values).all(1)
In [184]: df[row_mask]
Out[184]:
vals ids ids2
0 1 a a
The where() Method and Masking#
Selecting values from a Series with a boolean vector generally returns a
subset of the data. To guarantee that selection output has the same shape as
the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
In [185]: s[s > 0]
Out[185]:
3 1
2 2
1 3
0 4
dtype: int64
To return a Series of the same shape as the original:
In [186]: s.where(s > 0)
Out[186]:
4 NaN
3 1.0
2 2.0
1 3.0
0 4.0
dtype: float64
Selecting values from a DataFrame with a boolean criterion now also preserves
input data shape. where is used under the hood as the implementation.
The code below is equivalent to df.where(df < 0).
In [187]: df[df < 0]
Out[187]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
In addition, where takes an optional other argument for replacement of
values where the condition is False, in the returned copy.
In [188]: df.where(df < 0, -df)
Out[188]:
A B C D
2000-01-01 -2.104139 -1.309525 -0.485855 -0.245166
2000-01-02 -0.352480 -0.390389 -1.192319 -1.655824
2000-01-03 -0.864883 -0.299674 -0.227870 -0.281059
2000-01-04 -0.846958 -1.222082 -0.600705 -1.233203
2000-01-05 -0.669692 -0.605656 -1.169184 -0.342416
2000-01-06 -0.868584 -0.948458 -2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 -0.168904 -0.048048
2000-01-08 -0.801196 -1.392071 -0.048788 -0.808838
You may wish to set values based on some boolean criteria.
This can be done intuitively like so:
In [189]: s2 = s.copy()
In [190]: s2[s2 < 0] = 0
In [191]: s2
Out[191]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [192]: df2 = df.copy()
In [193]: df2[df2 < 0] = 0
In [194]: df2
Out[194]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000
By default, where returns a modified copy of the data. There is an
optional parameter inplace so that the original data can be modified
without creating a copy:
In [195]: df_orig = df.copy()
In [196]: df_orig.where(df > 0, -df, inplace=True)
In [197]: df_orig
Out[197]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
2000-01-07 2.670153 0.114722 0.168904 0.048048
2000-01-08 0.801196 1.392071 0.048788 0.808838
Note
The signature for DataFrame.where() differs from numpy.where().
Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
In [198]: df.where(df < 0, -df) == np.where(df < 0, df, -df)
Out[198]:
A B C D
2000-01-01 True True True True
2000-01-02 True True True True
2000-01-03 True True True True
2000-01-04 True True True True
2000-01-05 True True True True
2000-01-06 True True True True
2000-01-07 True True True True
2000-01-08 True True True True
Alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame),
such that partial selection with setting is possible. This is analogous to
partial setting via .loc (but on the contents rather than the axis labels).
In [199]: df2 = df.copy()
In [200]: df2[df2[1:4] > 0] = 3
In [201]: df2
Out[201]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838
Where can also accept axis and level parameters to align the input when
performing the where.
In [202]: df2 = df.copy()
In [203]: df2.where(df2 > 0, df2['A'], axis='index')
Out[203]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
This is equivalent to (but faster than) the following.
In [204]: df2 = df.copy()
In [205]: df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])
Out[205]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
where can accept a callable as condition and other arguments. The function must take one argument (the calling Series or DataFrame) and return valid output to be used as the condition or other argument.
In [206]: df3 = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [4, 5, 6],
.....: 'C': [7, 8, 9]})
.....:
In [207]: df3.where(lambda x: x > 4, lambda x: x + 10)
Out[207]:
A B C
0 11 14 7
1 12 5 8
2 13 6 9
Mask#
mask() is the inverse boolean operation of where.
In [208]: s.mask(s >= 0)
Out[208]:
4 NaN
3 NaN
2 NaN
1 NaN
0 NaN
dtype: float64
In [209]: df.mask(df >= 0)
Out[209]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
Setting with enlargement conditionally using numpy()#
An alternative to where() is to use numpy.where().
Combined with setting a new column, you can use it to enlarge a DataFrame where the
values are determined conditionally.
Consider you have two choices to choose from in the following DataFrame. And you want to
set a new column color to ‘green’ when the second column has ‘Z’. You can do the
following:
In [210]: df = pd.DataFrame({'col1': list('ABBC'), 'col2': list('ZZXY')})
In [211]: df['color'] = np.where(df['col2'] == 'Z', 'green', 'red')
In [212]: df
Out[212]:
col1 col2 color
0 A Z green
1 B Z green
2 B X red
3 C Y red
If you have multiple conditions, you can use numpy.select() to achieve that. Say
corresponding to three conditions there are three choice of colors, with a fourth color
as a fallback, you can do the following.
In [213]: conditions = [
.....: (df['col2'] == 'Z') & (df['col1'] == 'A'),
.....: (df['col2'] == 'Z') & (df['col1'] == 'B'),
.....: (df['col1'] == 'B')
.....: ]
.....:
In [214]: choices = ['yellow', 'blue', 'purple']
In [215]: df['color'] = np.select(conditions, choices, default='black')
In [216]: df
Out[216]:
col1 col2 color
0 A Z yellow
1 B Z blue
2 B X purple
3 C Y black
The query() Method#
DataFrame objects have a query()
method that allows selection using an expression.
You can get the value of the frame where column b has values
between the values of columns a and c. For example:
In [217]: n = 10
In [218]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [219]: df
Out[219]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437
# pure python
In [220]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[220]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
# query
In [221]: df.query('(a < b) & (b < c)')
Out[221]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
Do the same thing but fall back on a named index if there is no column
with the name a.
In [222]: df = pd.DataFrame(np.random.randint(n / 2, size=(n, 2)), columns=list('bc'))
In [223]: df.index.name = 'a'
In [224]: df
Out[224]:
b c
a
0 0 4
1 0 1
2 3 4
3 4 3
4 1 4
5 0 3
6 0 1
7 3 4
8 2 3
9 1 1
In [225]: df.query('a < b and b < c')
Out[225]:
b c
a
2 3 4
If instead you don’t want to or cannot name your index, you can use the name
index in your query expression:
In [226]: df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))
In [227]: df
Out[227]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
In [228]: df.query('index < b < c')
Out[228]:
b c
2 5 6
Note
If the name of your index overlaps with a column name, the column name is
given precedence. For example,
In [229]: df = pd.DataFrame({'a': np.random.randint(5, size=5)})
In [230]: df.index.name = 'a'
In [231]: df.query('a > 2') # uses the column 'a', not the index
Out[231]:
a
a
1 3
3 3
You can still use the index in a query expression by using the special
identifier ‘index’:
In [232]: df.query('index > 2')
Out[232]:
a
a
3 3
4 2
If for some reason you have a column named index, then you can refer to
the index as ilevel_0 as well, but at this point you should consider
renaming your columns to something less ambiguous.
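For example, here is a minimal sketch of that situation (the frame df_amb is made up for illustration); per the precedence rule above, the column named index should win the plain name lookup, while ilevel_0 still reaches the row index:
df_amb = pd.DataFrame({'index': [1, 5, 3], 'b': [2, 2, 9]})
df_amb.query('index > 2')     # 'index' resolves to the column named "index"
df_amb.query('ilevel_0 > 0')  # ilevel_0 refers to level 0 of the row index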
MultiIndex query() Syntax#
You can also use the levels of a DataFrame with a
MultiIndex as if they were columns in the frame:
In [233]: n = 10
In [234]: colors = np.random.choice(['red', 'green'], size=n)
In [235]: foods = np.random.choice(['eggs', 'ham'], size=n)
In [236]: colors
Out[236]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'], dtype='<U5')
In [237]: foods
Out[237]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs',
'eggs'], dtype='<U4')
In [238]: index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
In [239]: df = pd.DataFrame(np.random.randn(n, 2), index=index)
In [240]: df
Out[240]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [241]: df.query('color == "red"')
Out[241]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
If the levels of the MultiIndex are unnamed, you can refer to them using
special names:
In [242]: df.index.names = [None, None]
In [243]: df
Out[243]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [244]: df.query('ilevel_0 == "red"')
Out[244]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
The convention is ilevel_0, which means “index level 0” for the 0th level
of the index.
query() Use Cases#
A use case for query() is when you have a collection of
DataFrame objects that have a subset of column names (or index
levels/names) in common. You can pass the same query to both frames without
having to specify which frame you’re interested in querying.
In [245]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [246]: df
Out[246]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611
In [247]: df2 = pd.DataFrame(np.random.rand(n + 2, 3), columns=df.columns)
In [248]: df2
Out[248]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
8 0.420223 0.136404 0.141295
9 0.732206 0.419540 0.604675
10 0.604466 0.848974 0.896165
11 0.589168 0.920046 0.732716
In [249]: expr = '0.0 <= a <= c <= 0.5'
In [250]: map(lambda frame: frame.query(expr), [df, df2])
Out[250]: <map at 0x7f1ea0d8e580>
query() Python versus pandas Syntax Comparison#
Full numpy-like syntax:
In [251]: df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
In [252]: df
Out[252]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0
In [253]: df.query('(a < b) & (b < c)')
Out[253]:
a b c
0 7 8 9
In [254]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[254]:
a b c
0 7 8 9
Slightly nicer by removing the parentheses (comparison operators bind tighter
than & and |):
In [255]: df.query('a < b & b < c')
Out[255]:
a b c
0 7 8 9
Use English instead of symbols:
In [256]: df.query('a < b and b < c')
Out[256]:
a b c
0 7 8 9
Pretty close to how you might write it on paper:
In [257]: df.query('a < b < c')
Out[257]:
a b c
0 7 8 9
The in and not in operators#
query() also supports special use of Python’s in and
not in comparison operators, providing a succinct syntax for calling the
isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [258]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:
In [259]: df
Out[259]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [260]: df.query('a in b')
Out[260]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
# How you'd do it in pure Python
In [261]: df[df['a'].isin(df['b'])]
Out[261]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
In [262]: df.query('a not in b')
Out[262]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [263]: df[~df['a'].isin(df['b'])]
Out[263]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
You can combine this with other expressions for very succinct queries:
# rows where cols a and b have overlapping values
# and col c's values are less than col d's
In [264]: df.query('a in b and c < d')
Out[264]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
# pure Python
In [265]: df[df['b'].isin(df['a']) & (df['c'] < df['d'])]
Out[265]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
10 f c 0 6
11 f c 1 2
Note
Note that in and not in are evaluated in Python, since numexpr
has no equivalent of this operation. However, only the in/not in
expression itself is evaluated in vanilla Python. For example, in the
expression
df.query('a in b + c + d')
(b + c + d) is evaluated by numexpr and then the in
operation is evaluated in plain Python. In general, any operations that can
be evaluated using numexpr will be.
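If you prefer the entire expression to be evaluated in plain Python (for example while debugging), query() forwards an engine keyword to pandas.eval; a small sketch, reusing the frame above:
# same selection as In [260], but evaluated entirely by the Python engine
df.query('a in b', engine='python')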
Special use of the == operator with list objects#
Comparing a list of values to a column using ==/!= works similarly
to in/not in.
In [266]: df.query('b == ["a", "b", "c"]')
Out[266]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [267]: df[df['b'].isin(["a", "b", "c"])]
Out[267]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [268]: df.query('c == [1, 2]')
Out[268]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [269]: df.query('c != [1, 2]')
Out[269]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# using in/not in
In [270]: df.query('[1, 2] in c')
Out[270]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [271]: df.query('[1, 2] not in c')
Out[271]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# pure Python
In [272]: df[df['c'].isin([1, 2])]
Out[272]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
Boolean operators#
You can negate boolean expressions with the word not or the ~ operator.
In [273]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [274]: df['bools'] = np.random.rand(len(df)) > 0.5
In [275]: df.query('~bools')
Out[275]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [276]: df.query('not bools')
Out[276]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [277]: df.query('not bools') == df[~df['bools']]
Out[277]:
a b c bools
2 True True True True
7 True True True True
8 True True True True
Of course, expressions can be arbitrarily complex too:
# short query syntax
In [278]: shorter = df.query('a < b < c and (not bools) or bools > 2')
# equivalent in pure Python
In [279]: longer = df[(df['a'] < df['b'])
.....: & (df['b'] < df['c'])
.....: & (~df['bools'])
.....: | (df['bools'] > 2)]
.....:
In [280]: shorter
Out[280]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [281]: longer
Out[281]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [282]: shorter == longer
Out[282]:
a b c bools
7 True True True True
Performance of query()#
DataFrame.query() using numexpr is slightly faster than Python for
large frames.
Note
You will only see the performance benefits of using the numexpr engine
with DataFrame.query() if your frame has more than approximately 200,000
rows.
[Figure omitted: timing comparison of DataFrame.query() versus pure-Python boolean indexing. The plot was created using a DataFrame with 3 columns, each containing floating point values generated using numpy.random.randn().]
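A rough way to measure this on your own data with the standard library timeit module (a sketch; the crossover point and exact numbers depend on your machine and on whether numexpr is installed):
import timeit

import numpy as np
import pandas as pd

big = pd.DataFrame(np.random.randn(1_000_000, 3), columns=list('abc'))

t_bool = timeit.timeit(lambda: big[(big['a'] < big['b']) & (big['b'] < big['c'])], number=20)
t_query = timeit.timeit(lambda: big.query('(a < b) & (b < c)'), number=20)

print(f'boolean indexing: {t_bool:.3f}s, query(): {t_query:.3f}s')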
Duplicate data#
If you want to identify and remove duplicate rows in a DataFrame, there are
two methods that will help: duplicated and drop_duplicates. Each
takes as an argument the columns to use to identify duplicated rows.
duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated.
drop_duplicates removes duplicate rows.
By default, the first observed row of a duplicate set is considered unique, but
each method has a keep parameter to specify targets to be kept.
keep='first' (default): mark / drop duplicates except for the first occurrence.
keep='last': mark / drop duplicates except for the last occurrence.
keep=False: mark / drop all duplicates.
In [283]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
.....: 'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
In [284]: df2
Out[284]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [285]: df2.duplicated('a')
Out[285]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool
In [286]: df2.duplicated('a', keep='last')
Out[286]:
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
In [287]: df2.duplicated('a', keep=False)
Out[287]:
0 True
1 True
2 True
3 True
4 True
5 False
6 False
dtype: bool
In [288]: df2.drop_duplicates('a')
Out[288]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329
In [289]: df2.drop_duplicates('a', keep='last')
Out[289]:
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [290]: df2.drop_duplicates('a', keep=False)
Out[290]:
a b c
5 three x -1.964475
6 four x 1.298329
Also, you can pass a list of columns to identify duplications.
In [291]: df2.duplicated(['a', 'b'])
Out[291]:
0 False
1 False
2 False
3 False
4 True
5 False
6 False
dtype: bool
In [292]: df2.drop_duplicates(['a', 'b'])
Out[292]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated then perform slicing.
The same set of options are available for the keep parameter.
In [293]: df3 = pd.DataFrame({'a': np.arange(6),
.....: 'b': np.random.randn(6)},
.....: index=['a', 'a', 'b', 'c', 'b', 'a'])
.....:
In [294]: df3
Out[294]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [295]: df3.index.duplicated()
Out[295]: array([False, True, False, False, True, True])
In [296]: df3[~df3.index.duplicated()]
Out[296]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409
In [297]: df3[~df3.index.duplicated(keep='last')]
Out[297]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [298]: df3[~df3.index.duplicated(keep=False)]
Out[298]:
a b
c 3 -0.894409
Dictionary-like get() method#
Each of Series and DataFrame has a get method which can return a
default value.
In [299]: s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
In [300]: s.get('a') # equivalent to s['a']
Out[300]: 1
In [301]: s.get('x', default=-1)
Out[301]: -1
Looking up values by index/column labels#
Sometimes you want to extract a set of values given a sequence of row labels
and column labels, this can be achieved by pandas.factorize and NumPy indexing.
For instance:
In [302]: df = pd.DataFrame({'col': ["A", "A", "B", "B"],
.....: 'A': [80, 23, np.nan, 22],
.....: 'B': [80, 55, 76, 67]})
.....:
In [303]: df
Out[303]:
col A B
0 A 80.0 80
1 A 23.0 55
2 B NaN 76
3 B 22.0 67
In [304]: idx, cols = pd.factorize(df['col'])
In [305]: df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx]
Out[305]: array([80., 23., 76., 67.])
Formerly this could be achieved with the dedicated DataFrame.lookup method
which was deprecated in version 1.2.0.
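If you still want a reusable stand-in for the removed method, the pattern above can be wrapped in a small helper. This is only a sketch; the name lookup_by_labels is invented here, and it assumes row_labels and col_labels have equal length, as DataFrame.lookup did:
def lookup_by_labels(frame, row_labels, col_labels):
    # Reproduce the old DataFrame.lookup behaviour with factorize + NumPy indexing.
    rows = frame.index.get_indexer(row_labels)
    idx, cols = pd.factorize(pd.Index(col_labels))
    return frame.reindex(cols, axis=1).to_numpy()[rows, idx]

lookup_by_labels(df, df.index, df['col'])  # array([80., 23., 76., 67.])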
Index objects#
The pandas Index class and its subclasses can be viewed as
implementing an ordered multiset. Duplicates are allowed. However, if you try
to convert an Index object with duplicate entries into a
set, an exception will be raised.
Index also provides the infrastructure necessary for
lookups, data alignment, and reindexing. The easiest way to create an
Index directly is to pass a list or other sequence to
Index:
In [306]: index = pd.Index(['e', 'd', 'a', 'b'])
In [307]: index
Out[307]: Index(['e', 'd', 'a', 'b'], dtype='object')
In [308]: 'd' in index
Out[308]: True
You can also pass a name to be stored in the index:
In [309]: index = pd.Index(['e', 'd', 'a', 'b'], name='something')
In [310]: index.name
Out[310]: 'something'
The name, if set, will be shown in the console display:
In [311]: index = pd.Index(list(range(5)), name='rows')
In [312]: columns = pd.Index(['A', 'B', 'C'], name='cols')
In [313]: df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
In [314]: df
Out[314]:
cols A B C
rows
0 1.295989 -1.051694 1.340429
1 -2.366110 0.428241 0.387275
2 0.433306 0.929548 0.278094
3 2.154730 -0.315628 0.264223
4 1.126818 1.132290 -0.353310
In [315]: df['A']
Out[315]:
rows
0 1.295989
1 -2.366110
2 0.433306
3 2.154730
4 1.126818
Name: A, dtype: float64
Setting metadata#
Indexes are “mostly immutable”, but it is possible to set and change their
name attribute. You can use the rename and set_names methods to set these attributes
directly; they default to returning a copy.
See Advanced Indexing for usage of MultiIndexes.
In [316]: ind = pd.Index([1, 2, 3])
In [317]: ind.rename("apple")
Out[317]: Int64Index([1, 2, 3], dtype='int64', name='apple')
In [318]: ind
Out[318]: Int64Index([1, 2, 3], dtype='int64')
In [319]: ind.set_names(["apple"], inplace=True)
In [320]: ind.name = "bob"
In [321]: ind
Out[321]: Int64Index([1, 2, 3], dtype='int64', name='bob')
set_names, set_levels, and set_codes also take an optional
level argument
In [322]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']], names=['first', 'second'])
In [323]: index
Out[323]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])
In [324]: index.levels[1]
Out[324]: Index(['one', 'two'], dtype='object', name='second')
In [325]: index.set_levels(["a", "b"], level=1)
Out[325]:
MultiIndex([(0, 'a'),
(0, 'b'),
(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['first', 'second'])
Set operations on Index objects#
The two main operations are union and intersection.
Difference is provided via the .difference() method.
In [326]: a = pd.Index(['c', 'b', 'a'])
In [327]: b = pd.Index(['c', 'e', 'd'])
In [328]: a.difference(b)
Out[328]: Index(['a', 'b'], dtype='object')
Also available is the symmetric_difference operation, which returns elements
that appear in either idx1 or idx2, but not in both. This is
equivalent to the Index created by idx1.difference(idx2).union(idx2.difference(idx1)),
with duplicates dropped.
In [329]: idx1 = pd.Index([1, 2, 3, 4])
In [330]: idx2 = pd.Index([2, 3, 4, 5])
In [331]: idx1.symmetric_difference(idx2)
Out[331]: Int64Index([1, 5], dtype='int64')
Note
The resulting index from a set operation will be sorted in ascending order.
When performing Index.union() between indexes with different dtypes, the indexes
must be cast to a common dtype. Typically, though not always, this is object dtype. The
exception is when performing a union between integer and float data. In this case, the
integer values are converted to float.
In [332]: idx1 = pd.Index([0, 1, 2])
In [333]: idx2 = pd.Index([0.5, 1.5])
In [334]: idx1.union(idx2)
Out[334]: Float64Index([0.0, 0.5, 1.0, 1.5, 2.0], dtype='float64')
Missing values#
Important
Even though Index can hold missing values (NaN), it should be avoided
if you do not want any unexpected results. For example, some operations
exclude missing values implicitly.
Index.fillna fills missing values with specified scalar value.
In [335]: idx1 = pd.Index([1, np.nan, 3, 4])
In [336]: idx1
Out[336]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')
In [337]: idx1.fillna(2)
Out[337]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')
In [338]: idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'),
.....: pd.NaT,
.....: pd.Timestamp('2011-01-03')])
.....:
In [339]: idx2
Out[339]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]', freq=None)
In [340]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[340]: DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq=None)
Set / reset index#
Occasionally you will load or create a data set into a DataFrame and want to
add an index after you’ve already done so. There are a couple of different
ways.
Set an index#
DataFrame has a set_index() method which takes a column name
(for a regular Index) or a list of column names (for a MultiIndex).
To create a new, re-indexed DataFrame:
In [341]: data
Out[341]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
In [342]: indexed1 = data.set_index('c')
In [343]: indexed1
Out[343]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [344]: indexed2 = data.set_index(['a', 'b'])
In [345]: indexed2
Out[345]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
The append keyword option allows you to keep the existing index and append
the given columns to a MultiIndex:
In [346]: frame = data.set_index('c', drop=False)
In [347]: frame = frame.set_index(['a', 'b'], append=True)
In [348]: frame
Out[348]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
Other options in set_index allow you to not drop the index columns or to add
the index in-place (without creating a new object):
In [349]: data.set_index('c', drop=False)
Out[349]:
a b c d
c
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [350]: data.set_index(['a', 'b'], inplace=True)
In [351]: data
Out[351]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
Reset the index#
As a convenience, there is a function on DataFrame called
reset_index() which transfers the index values into the
DataFrame’s columns and sets a simple integer index.
This is the inverse operation of set_index().
In [352]: data
Out[352]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
In [353]: data.reset_index()
Out[353]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
The output is more similar to a SQL table or a record array. The names for the
columns derived from the index are the ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:
In [354]: frame
Out[354]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [355]: frame.reset_index(level=1)
Out[355]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0
reset_index takes an optional parameter drop which, if true, simply
discards the index instead of putting the index values in the DataFrame’s columns.
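A short sketch of drop=True using the objects from above:
frame.reset_index(level=1, drop=True)  # the 'a' level is discarded rather than becoming a column
data.reset_index(drop=True)            # plain integer index; the old 'a'/'b' index is thrown away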
Adding an ad hoc index#
If you create an index yourself, you can just assign it to the index field:
data.index = index
Returning a view versus a copy#
When setting values in a pandas object, care must be taken to avoid what is called
chained indexing. Here is an example.
In [356]: dfmi = pd.DataFrame([list('abcd'),
.....: list('efgh'),
.....: list('ijkl'),
.....: list('mnop')],
.....: columns=pd.MultiIndex.from_product([['one', 'two'],
.....: ['first', 'second']]))
.....:
In [357]: dfmi
Out[357]:
one two
first second first second
0 a b c d
1 e f g h
2 i j k l
3 m n o p
Compare these two access methods:
In [358]: dfmi['one']['second']
Out[358]:
0 b
1 f
2 j
3 n
Name: second, dtype: object
In [359]: dfmi.loc[:, ('one', 'second')]
Out[359]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object
These both yield the same results, so which should you use? It is instructive to understand the order
of operations on these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed.
Then another Python operation, dfmi_with_one['second'], selects the series indexed by 'second'.
The intermediate variable dfmi_with_one is written out here because pandas sees these operations as separate events:
they are separate calls to __getitem__, so pandas has to treat them as linear operations that happen one after another.
Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one','second')) to a single call to
__getitem__. This allows pandas to deal with this as a single entity. Furthermore this order of operations can be significantly
faster, and allows one to index both axes if so desired.
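Spelled out, the chained form is equivalent to the two separate steps below (the intermediate name dfmi_with_one is only for illustration):
dfmi_with_one = dfmi['one']     # first __getitem__: may return a view or a copy
dfmi_with_one['second']         # second __getitem__, applied to that intermediate object

dfmi.loc[:, ('one', 'second')]  # the same selection in a single __getitem__ call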
Why does assignment fail when using chained indexing?#
The problem in the previous section is just a performance issue. What’s up with
the SettingWithCopy warning? We don’t usually throw warnings around when
you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has
inherently unpredictable results. To see this, think about how the Python
interpreter executes this code:
dfmi.loc[:, ('one', 'second')] = value
# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
But this code is handled differently:
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
See that __getitem__ in there? Outside of simple cases, it’s very hard to
predict whether it will return a view or a copy (it depends on the memory layout
of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown
out immediately afterward. That’s what SettingWithCopy is warning you
about!
Note
You may be wondering whether we should be concerned about the loc
property in the first example. But dfmi.loc is guaranteed to be dfmi
itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course,
dfmi.loc.__getitem__(idx) may be a view or a copy of dfmi.
Sometimes a SettingWithCopy warning will arise at times when there’s no
obvious chained indexing going on. These are the bugs that
SettingWithCopy is designed to catch! pandas is probably trying to warn you
that you’ve done this:
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
# We don't know whether this will modify df or not!
foo['quux'] = value
return foo
Yikes!
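One way to make the intent explicit and avoid the warning is to take a copy up front. A sketch of the same function, rewritten (value is passed in here just so the snippet is self-contained):
def do_something(df, value):
    foo = df[['bar', 'baz']].copy()  # explicitly a new object, never a view of df
    # ... many lines here ...
    foo['quux'] = value              # clearly modifies the copy, not df
    return foo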
Evaluation order matters#
When you use chained indexing, the order and type of the indexing operation
partially determine whether the result is a slice into the original object, or
a copy of the slice.
pandas has the SettingWithCopyWarning because assigning to a copy of a
slice is frequently not intentional, but a mistake caused by chained indexing
returning a copy where a slice was expected.
If you would like pandas to be more or less trusting about assignment to a
chained indexing expression, you can set the option
mode.chained_assignment to one of these values:
'warn', the default, means a SettingWithCopyWarning is printed.
'raise' means pandas will raise a SettingWithCopyError
you have to deal with.
None will suppress the warnings entirely.
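If you only need a different behaviour temporarily, pandas also offers option_context, which restores the previous setting when the block exits; a short sketch:
with pd.option_context('mode.chained_assignment', 'raise'):
    pass  # inside this block chained assignment raises SettingWithCopyError
# outside the block the previous setting (the default 'warn') is restored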
In [360]: dfb = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
# This will show the SettingWithCopyWarning
# but the frame values will be set
In [361]: dfb['c'][dfb['a'].str.startswith('o')] = 42
This, however, operates on a copy and will not work.
>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb['a'].str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
A chained assignment can also crop up in setting in a mixed dtype frame.
Note
These setting rules apply to all of .loc/.iloc.
The following is the recommended access method using .loc for multiple items (using mask) and a single item using a fixed index:
In [362]: dfc = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
In [363]: dfd = dfc.copy()
# Setting multiple items using a mask
In [364]: mask = dfd['a'].str.startswith('o')
In [365]: dfd.loc[mask, 'c'] = 42
In [366]: dfd
Out[366]:
a c
0 one 42
1 one 42
2 two 2
3 three 3
4 two 4
5 one 42
6 six 6
# Setting a single item
In [367]: dfd = dfc.copy()
In [368]: dfd.loc[2, 'a'] = 11
In [369]: dfd
Out[369]:
a c
0 one 0
1 one 1
2 11 2
3 three 3
4 two 4
5 one 5
6 six 6
The following can work at times, but it is not guaranteed to, and therefore should be avoided:
In [370]: dfd = dfc.copy()
In [371]: dfd['a'][2] = 111
In [372]: dfd
Out[372]:
a c
0 one 0
1 one 1
2 111 2
3 three 3
4 two 4
5 one 5
6 six 6
Last, the subsequent example will not work at all, and so should be avoided:
>>> pd.set_option('mode.chained_assignment','raise')
>>> dfd.loc[0]['a'] = 1111
Traceback (most recent call last)
...
SettingWithCopyError:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
Warning
The chained assignment warnings / exceptions are aiming to inform the user of a possibly invalid
assignment. There may be false positives; situations where a chained assignment is inadvertently
reported.
| 851
| 1,256
|
How to filter multiple rows with explicit values and select a column using pandas
Selecting Multiple Explicit Rows and Column to then Calculate Mean
Hi everyone, thank you for taking a look at this question. I'm working with a pokémon data set and looking to select explicit rows and a column to then calculate the mean for that value.
The columns worked with are type_1, generation, and total_points.
The row values are Grass and 1:
Grass corresponds to the type and 1 to the generation.
grass_total_points = pokedex.loc[pokedex.type_1 == 'Grass', ['total_points']].mean()
The code above works and returns the total mean for all grass types across all 8 generations, but I would like to retrieve the means on a generation-by-generation basis.
gen_1 = pokedex.loc[pokedex['generation'] == '1' & pokedex['type_1'] == 'Grass', ['total_points']].mean()
I attempted the code above with no luck; I searched around and cannot find any answers to this.
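A likely fix is to parenthesise each comparison, because & binds more tightly than ==, and to compare generation against a value of the right dtype (1 rather than '1' if that column is numeric). A sketch, assuming the pokedex frame from the question:
gen_1 = pokedex.loc[
    (pokedex['generation'] == 1) & (pokedex['type_1'] == 'Grass'), 'total_points'
].mean()

# or compute the mean for every generation at once
pokedex.loc[pokedex['type_1'] == 'Grass'].groupby('generation')['total_points'].mean()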
|
64,046,556
|
Nested loop results in table, Python
|
I need to loop a computation over two lists of elements and save the results in a table. So, say that

months = [1,2,3,4,5]
Region = ['Region1', 'Region2']

and that my code is of the type

df=[]
for month in months:
    for region in Region:
        code
        x = result
        df.append(x)

What I cannot achieve is rendering the final result in a table in which the rows are regions and the columns are months

         1  2  3  4  5
Region1  a  b  c  d  e
Region2  f  g  h  i  j
| 64,047,465
| 2020-09-24T12:34:36.907000
| 3
| null | 0
| 87
|
python|pandas
|
Another solution - more lines of code:

import pandas as pd
import ast

months = [1,2,3,4,5]
Regions = ['Region1', 'Region2']

df = pd.DataFrame()
for region in Regions:
    row = '{\'Region\': \'' + region + '\', '
    for month in months:
        # put your calculation code
        x = month + 1
        row = row + '\'' + str(month) + '\':[' + str(x) + '],'
    row = row[:len(row)-1] + '}'
    row = ast.literal_eval(row)
    df = df.append(pd.DataFrame(row))
df
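An alternative sketch that avoids building and parsing dictionary strings: collect the results in a nested dict keyed by region and month, then build the frame in one call (the month + 1 line is only a placeholder for the real computation):
import pandas as pd

months = [1, 2, 3, 4, 5]
regions = ['Region1', 'Region2']

results = {}
for region in regions:
    for month in months:
        x = month + 1  # placeholder for the real computation
        results.setdefault(region, {})[month] = x

df = pd.DataFrame.from_dict(results, orient='index')  # rows are regions, columns are months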
| 2020-09-24T13:25:44.680000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.json_normalize.html
|
pandas.json_normalize#
pandas.json_normalize(data, record_path=None, meta=None, meta_prefix=None, record_prefix=None, errors='raise', sep='.', max_level=None)[source]#
Normalize semi-structured JSON data into a flat table.
Parameters
data : dict or list of dicts
    Unserialized JSON objects.
record_path : str or list of str, default None
    Path in each object to list of records. If not passed, data will be
    assumed to be an array of records.
meta : list of paths (str or list of str), default None
    Fields to use as metadata for each record in resulting table.
meta_prefix : str, default None
    If True, prefix records with dotted (?) path, e.g. foo.bar.field if
    meta is ['foo', 'bar'].
record_prefix : str, default None
    If True, prefix records with dotted (?) path, e.g. foo.bar.field if
    path to records is ['foo', 'bar'].
errors : {'raise', 'ignore'}, default 'raise'
    Configures error handling.
    'ignore' : will ignore KeyError if keys listed in meta are not always present.
    'raise' : will raise KeyError if keys listed in meta are not always present.
sep : str, default '.'
    Nested records will generate names separated by sep.
    e.g., for sep='.', {'foo': {'bar': 0}} -> foo.bar.
max_level : int, default None
    Max number of levels (depth of dict) to normalize.
    If None, normalizes all levels.
    New in version 0.25.0.

Returns
frame : DataFrame
    Normalize semi-structured JSON data into a flat table.
Examples
>>> data = [
... {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
... {"name": {"given": "Mark", "family": "Regner"}},
... {"id": 2, "name": "Faye Raker"},
... ]
>>> pd.json_normalize(data)
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
>>> data = [
... {
... "id": 1,
... "name": "Cole Volk",
... "fitness": {"height": 130, "weight": 60},
... },
... {"name": "Mark Reg", "fitness": {"height": 130, "weight": 60}},
... {
... "id": 2,
... "name": "Faye Raker",
... "fitness": {"height": 130, "weight": 60},
... },
... ]
>>> pd.json_normalize(data, max_level=0)
id name fitness
0 1.0 Cole Volk {'height': 130, 'weight': 60}
1 NaN Mark Reg {'height': 130, 'weight': 60}
2 2.0 Faye Raker {'height': 130, 'weight': 60}
Normalizes nested data up to level 1.
>>> data = [
... {
... "id": 1,
... "name": "Cole Volk",
... "fitness": {"height": 130, "weight": 60},
... },
... {"name": "Mark Reg", "fitness": {"height": 130, "weight": 60}},
... {
... "id": 2,
... "name": "Faye Raker",
... "fitness": {"height": 130, "weight": 60},
... },
... ]
>>> pd.json_normalize(data, max_level=1)
id name fitness.height fitness.weight
0 1.0 Cole Volk 130 60
1 NaN Mark Reg 130 60
2 2.0 Faye Raker 130 60
>>> data = [
... {
... "state": "Florida",
... "shortname": "FL",
... "info": {"governor": "Rick Scott"},
... "counties": [
... {"name": "Dade", "population": 12345},
... {"name": "Broward", "population": 40000},
... {"name": "Palm Beach", "population": 60000},
... ],
... },
... {
... "state": "Ohio",
... "shortname": "OH",
... "info": {"governor": "John Kasich"},
... "counties": [
... {"name": "Summit", "population": 1234},
... {"name": "Cuyahoga", "population": 1337},
... ],
... },
... ]
>>> result = pd.json_normalize(
... data, "counties", ["state", "shortname", ["info", "governor"]]
... )
>>> result
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
>>> data = {"A": [1, 2]}
>>> pd.json_normalize(data, "A", record_prefix="Prefix.")
Prefix.0
0 1
1 2
Returns normalized data with columns prefixed with the given string.
| 250
| 723
|
Nested loop results in table, Python
I need to loop a computation over two lists of elements and save the results in a table. So, say that
months = [1,2,3,4,5]
Region = ['Region1', 'Region2']
and that my code is of the type
df=[]
for month in months:
for region in Region:
code
x = result
df.append(x)
What I cannot achieve is rendering the final result in a table in which the rows are regions and columns are months
1 2 3 4 5
Region1 a b c d e
Region2 f g h i j
|
62,598,915
|
'|'.join is not working for a list of strings with more than one item
|
I have the following list of strings and the code:

mylist_bus = ["AAG","BOS","Ext"]
df.loc[df['lineId_EOD'].str.contains('AAG')]

with the following results:

ActivityType_EOD  lineId_EOD
leg               AAG_line7
leg               AAG_line50

Then I want to add one more column for these specific values:

for i, row in df.iterrows():
    if '|'.join(mylist_bus) in df.loc[i, "lineId_EOD"]:
        df.loc[i,"category_EOD"] = "bus"
df.loc[df["lineId_EOD"].str.contains('AAG')]

However, the result is the same as before, and nothing changes:

ActivityType_EOD  lineId_EOD
leg               AAG_line7
leg               AAG_line50

when I reduce the list to just one String, for example:

mylist_bus = ["AAG"]

then it works fine and I have the results:

ActivityType_EOD  lineId_EOD  category_EOD
leg               AAG_line7   bus
leg               AAG_line50  bus

but I need to have a list of more than one string.
| 62,599,060
| 2020-06-26T16:22:28.647000
| 3
| null | 0
| 87
|
python|pandas
|
You could use any and a generator:

for i, row in df.iterrows():
    if any(x in df.loc[i, "lineId_EOD"] for x in mylist_bus):
        df.loc[i,"category_EOD"] = "bus"
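A vectorised variant that keeps the original '|'.join idea by treating the joined string as a regex alternation for str.contains (a sketch, assuming df and mylist_bus from the question):
mask = df['lineId_EOD'].str.contains('|'.join(mylist_bus), na=False)
df.loc[mask, 'category_EOD'] = 'bus'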
| 2020-06-26T16:30:42.237000
| 1
|
https://pandas.pydata.org/docs/user_guide/text.html
|
Working with text data#
Text data types#
New in version 1.0.0.
There are two ways to store text data in pandas:
object -dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate
for many reasons:
You can accidentally store a mixture of strings and non-strings in an
object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes().
There isn’t a clear way to select just text while excluding non-text
but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear
than 'string'.
Currently, the performance of object dtype arrays of strings and
arrays.StringArray is about the same. We expect future enhancements
to significantly increase the performance and lower the memory overhead of
StringArray.
Warning
StringArray is currently considered experimental. The implementation
and parts of the API may change without warning.
For backwards-compatibility, object dtype remains the default type we
infer a list of strings to
In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0 a
1 b
2 c
dtype: object
To explicitly request string dtype, specify the dtype
In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0 a
1 b
2 c
dtype: string
In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0 a
1 b
2 c
dtype: string
Or astype after the Series or DataFrame is created
In [4]: s = pd.Series(["a", "b", "c"])
In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object
In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string
Changed in version 1.1.0.
You can also use StringDtype/"string" as the dtype on non-string data and
it will be converted to string dtype:
In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")
In [8]: s
Out[8]:
0 a
1 2
2 <NA>
dtype: string
In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:
In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")
In [11]: s1
Out[11]:
0 1
1 2
2 <NA>
dtype: Int64
In [12]: s2 = s1.astype("string")
In [13]: s2
Out[13]:
0 1
1 2
2 <NA>
dtype: string
In [14]: type(s2[0])
Out[14]: str
Behavior differences#
These are places where the behavior of StringDtype objects differ from
object dtype
For StringDtype, string accessor methods
that return numeric output will always return a nullable integer dtype,
rather than either int or float dtype, depending on the presence of NA values.
Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")
In [16]: s
Out[16]:
0 a
1 <NA>
2 b
dtype: string
In [17]: s.str.count("a")
Out[17]:
0 1
1 <NA>
2 0
dtype: Int64
In [18]: s.dropna().str.count("a")
Out[18]:
0 1
2 0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype
In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")
In [20]: s2.str.count("a")
Out[20]:
0 1.0
1 NaN
2 0.0
dtype: float64
In [21]: s2.dropna().str.count("a")
Out[21]:
0 1
2 0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for
methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0 False
1 <NA>
2 False
dtype: boolean
In [23]: s.str.match("a")
Out[23]:
0 True
1 <NA>
2 False
dtype: boolean
Some string methods, like Series.str.decode() are not available
on StringArray because StringArray only holds strings, not
bytes.
In comparison operations, arrays.StringArray and Series backed
by a StringArray will return an object with BooleanDtype,
rather than a bool dtype object. Missing values in a StringArray
will propagate in comparison operations, rather than always comparing
unequal like numpy.nan.
Everything else that follows in the rest of this document applies equally to
string and object dtype.
String methods#
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
accessed via the str attribute and generally have names matching
the equivalent (scalar) built-in string methods:
In [24]: s = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
....: )
....:
In [25]: s.str.lower()
Out[25]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
In [26]: s.str.upper()
Out[26]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string
In [27]: s.str.len()
Out[27]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])
In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or
transforming DataFrame columns. For instance, you may have columns with
leading or trailing whitespace:
In [32]: df = pd.DataFrame(
....: np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
....: )
....:
In [33]: df
Out[33]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Since df.columns is an Index object, we can use the .str accessor
In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')
In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed.
Here we are removing leading and trailing whitespaces, lower casing all names,
and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
In [37]: df
Out[37]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Note
If you have a Series where lots of elements are repeated
(i.e. the number of unique elements in the Series is a lot smaller than the length of the
Series), it can be faster to convert the original Series to one of type
category and then use .str.<method> or .dt.<property> on that.
The performance difference comes from the fact that, for Series of type category, the
string operations are done on the .categories and not on each element of the
Series.
Please note that a Series of type category with string .categories has
some limitations in comparison to Series of type string (e.g. you can’t add strings to
each other: s + " " + s won’t work if s is a Series of type category). Also,
.str methods which operate on elements of type list are not available on such a
Series.
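A rough way to see the effect, assuming a Series with heavily repeated values; exact timings will vary:
import timeit

import pandas as pd

s = pd.Series(['low', 'medium', 'high'] * 100_000)
s_cat = s.astype('category')

t_obj = timeit.timeit(lambda: s.str.upper(), number=10)
t_cat = timeit.timeit(lambda: s_cat.str.upper(), number=10)
print(f'object dtype: {t_obj:.3f}s, category dtype: {t_cat:.3f}s')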
Warning
Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Splitting and replacing strings#
Methods like split return a Series of lists:
In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")
In [39]: s2.str.split("_")
Out[39]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [40]: s2.str.split("_").str.get(1)
Out[40]:
0 b
1 d
2 <NA>
3 g
dtype: object
In [41]: s2.str.split("_").str[1]
Out[41]:
0 b
1 d
2 <NA>
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand.
In [42]: s2.str.split("_", expand=True)
Out[42]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h
When original Series has StringDtype, the output columns will all
be StringDtype as well.
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h
rsplit is similar to split except it works in the reverse direction,
i.e., from the end of the string to the beginning of the string:
In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h
replace optionally uses regular expressions:
In [45]: s3 = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
....: dtype="string",
....: )
....:
In [46]: s3
Out[46]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string
In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Warning
Some caution must be taken when dealing with regular expressions! The current behavior
is to treat single character patterns as literal strings, even when regex is set
to True. This behavior is deprecated and will be removed in a future version so
that the regex keyword is always respected.
Changed in version 1.2.0.
If you want literal replacement of a string (equivalent to str.replace()), you
can set the optional regex parameter to False, rather than escaping each
character. In this case both pat and repl must be strings:
In [48]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")
# These lines are equivalent
In [49]: dollars.str.replace(r"-\$", "-", regex=True)
Out[49]:
0 12
1 -10
2 $10,000
dtype: string
In [50]: dollars.str.replace("-$", "-", regex=False)
Out[50]:
0 12
1 -10
2 $10,000
dtype: string
The replace method can also take a callable as replacement. It is called
on every pat using re.sub(). The callable should expect one
positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [51]: pat = r"[a-z]+"
In [52]: def repl(m):
....: return m.group(0)[::-1]
....:
In [53]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[53]:
0 oof 123
1 rab zab
2 <NA>
dtype: string
# Using regex groups
In [54]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
In [55]: def repl(m):
....: return m.group("two").swapcase()
....:
In [56]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[56]:
0 bAR
1 <NA>
dtype: string
The replace method also accepts a compiled regular expression object
from re.compile() as a pattern. All flags should be included in the
compiled regular expression object.
In [57]: import re
In [58]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)
In [59]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[59]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled
regular expression object will raise a ValueError.
In [60]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9
(https://docs.python.org/3/library/stdtypes.html#str.removeprefix):
New in version 1.4.0.
In [61]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])
In [62]: s.str.removeprefix("str_")
Out[62]:
0 foo
1 bar
2 no_prefix
dtype: object
In [63]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])
In [64]: s.str.removesuffix("_str")
Out[64]:
0 foo
1 bar
2 no_suffix
dtype: object
Concatenation#
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(),
resp. Index.str.cat.
Concatenating a single Series into a string#
The content of a Series (or Index) can be concatenated:
In [65]: s = pd.Series(["a", "b", "c", "d"], dtype="string")
In [66]: s.str.cat(sep=",")
Out[66]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [67]: s.str.cat()
Out[67]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
In [68]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")
In [69]: t.str.cat(sep=",")
Out[69]: 'a,b,d'
In [70]: t.str.cat(sep=",", na_rep="-")
Out[70]: 'a,b,-,d'
Concatenating a Series and something list-like into a Series#
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).
In [71]: s.str.cat(["A", "B", "C", "D"])
Out[71]:
0 aA
1 bB
2 cC
3 dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [72]: s.str.cat(t)
Out[72]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string
In [73]: s.str.cat(t, na_rep="-")
Out[73]:
0 aa
1 bb
2 c-
3 dd
dtype: string
Concatenating a Series and something array-like into a Series#
The parameter others can also be two-dimensional. In this case, the number of rows must match the length of the calling Series (or Index).
In [74]: d = pd.concat([t, s], axis=1)
In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string
In [76]: d
Out[76]:
0 1
0 a a
1 b b
2 <NA> c
3 d d
In [77]: s.str.cat(d, na_rep="-")
Out[77]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and an indexed object into a Series, with alignment#
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.
In [78]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")
In [79]: s
Out[79]:
0 a
1 b
2 c
3 d
dtype: string
In [80]: u
Out[80]:
1 b
3 d
0 a
2 c
dtype: string
In [81]: s.str.cat(u)
Out[81]:
0 aa
1 bb
2 cc
3 dd
dtype: string
In [82]: s.str.cat(u, join="left")
Out[82]:
0 aa
1 bb
2 cc
3 dd
dtype: string
Warning
If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right').
In particular, alignment also means that the different lengths do not need to coincide anymore.
In [83]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")
In [84]: s
Out[84]:
0 a
1 b
2 c
3 d
dtype: string
In [85]: v
Out[85]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [86]: s.str.cat(v, join="left", na_rep="-")
Out[86]:
0 aa
1 bb
2 c-
3 dd
dtype: string
In [87]: s.str.cat(v, join="outer", na_rep="-")
Out[87]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string
The same alignment can be used when others is a DataFrame:
In [88]: f = d.loc[[3, 2, 1, 0], :]
In [89]: s
Out[89]:
0 a
1 b
2 c
3 d
dtype: string
In [90]: f
Out[90]:
0 1
3 d d
2 <NA> c
1 b b
0 a a
In [91]: s.str.cat(f, join="left", na_rep="-")
Out[91]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and many objects into a Series#
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray)
can be combined in a list-like container (including iterators, dict-views, etc.).
In [92]: s
Out[92]:
0 a
1 b
2 c
3 d
dtype: string
In [93]: u
Out[93]:
1 b
3 d
0 a
2 c
dtype: string
In [94]: s.str.cat([u, u.to_numpy()], join="left")
Out[94]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index),
but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):
In [95]: v
Out[95]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [96]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[96]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes,
the union of these indexes will be used as the basis for the final concatenation:
In [97]: u.loc[[3]]
Out[97]:
3 d
dtype: string
In [98]: v.loc[[-1, 0]]
Out[98]:
-1 z
0 a
dtype: string
In [99]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[99]:
3 dd-
-1 --z
0 a-a
dtype: string
Indexing with .str#
You can use [] notation to directly index by position locations. If you index past the end
of the string, the result will be a NaN.
In [100]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [101]: s.str[0]
Out[101]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string
In [102]: s.str[1]
Out[102]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string
Extracting substrings#
Extract first match in each subject (extract)#
Warning
Before version 0.23, argument expand of the extract method defaulted to
False. When expand=False, expand returns a Series, Index, or
DataFrame, depending on the subject and regular expression
pattern. When expand=True, it always returns a DataFrame,
which is more consistent and less confusing from the perspective of a user.
expand=True has been the default since version 0.23.0.
The extract method accepts a regular expression with at least one
capture group.
Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [103]: pd.Series(
.....: ["a1", "b2", "c3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])(\d)", expand=False)
.....:
Out[103]:
0 1
0 a 1
1 b 2
2 <NA> <NA>
Elements that do not match return a row filled with NaN. Thus, a
Series of messy strings can be “converted” into a like-indexed Series
or DataFrame of cleaned-up or more useful strings, without
necessitating get() to access tuples or re.match objects. The
dtype of the result is always object, even if no match is found and
the result only contains NaN.
Named groups like
In [104]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(
.....: r"(?P<letter>[ab])(?P<digit>\d)", expand=False
.....: )
.....:
Out[104]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>
and optional groups like
In [105]: pd.Series(
.....: ["a1", "b2", "3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])?(\d)", expand=False)
.....:
Out[105]:
0 1
0 a 1
1 b 2
2 <NA> 3
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
numbers will be used.
Extracting a regular expression with one group returns a DataFrame
with one column if expand=True.
In [106]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=True)
Out[106]:
0
0 1
1 2
2 <NA>
It returns a Series if expand=False.
In [107]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=False)
Out[107]:
0 1
1 2
2 <NA>
dtype: string
Calling on an Index with a regex with exactly one capture group
returns a DataFrame with one column if expand=True.
In [108]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"], dtype="string")
In [109]: s
Out[109]:
A11 a1
B22 b2
C33 c3
dtype: string
In [110]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
Out[110]:
letter
0 A
1 B
2 C
It returns an Index if expand=False.
In [111]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[111]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group
returns a DataFrame if expand=True.
In [112]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[112]:
letter 1
0 A 11
1 B 22
2 C 33
It raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
The table below summarizes the behavior of extract(expand=False)
(input subject in first column, number of groups in regex in
first row)
              1 group     >1 group
Index         Index       ValueError
Series        Series      DataFrame
Extract all matches in each subject (extractall)#
Unlike extract (which returns only the first match),
In [113]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"], dtype="string")
In [114]: s
Out[114]:
A a1a2
B b1
C c1
dtype: string
In [115]: two_groups = "(?P<letter>[a-z])(?P<digit>[0-9])"
In [116]: s.str.extract(two_groups, expand=True)
Out[116]:
letter digit
A a 1
B b 1
C c 1
the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its
rows. The last level of the MultiIndex is named match and
indicates the order in the subject.
In [117]: s.str.extractall(two_groups)
Out[117]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [118]: s = pd.Series(["a3", "b3", "c2"], dtype="string")
In [119]: s
Out[119]:
0 a3
1 b3
2 c2
dtype: string
then extractall(pat).xs(0, level='match') gives the same result as
extract(pat).
In [120]: extract_result = s.str.extract(two_groups, expand=True)
In [121]: extract_result
Out[121]:
letter digit
0 a 3
1 b 3
2 c 2
In [122]: extractall_result = s.str.extractall(two_groups)
In [123]: extractall_result
Out[123]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [124]: extractall_result.xs(0, level="match")
Out[124]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the
same result as a Series.str.extractall with a default index (starts from 0).
In [125]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[125]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [126]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Out[126]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
Testing for strings that match or contain a pattern#
You can check whether elements contain a pattern:
In [127]: pattern = r"[0-9][a-z]"
In [128]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.contains(pattern)
.....:
Out[128]:
0 False
1 False
2 True
3 True
4 True
5 True
dtype: boolean
Or whether elements match a pattern:
In [129]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.match(pattern)
.....:
Out[129]:
0 False
1 False
2 True
3 True
4 False
5 True
dtype: boolean
New in version 1.1.0.
In [130]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.fullmatch(pattern)
.....:
Out[130]:
0 False
1 False
2 True
3 True
4 False
5 False
dtype: boolean
Note
The distinction between match, fullmatch, and contains is strictness:
fullmatch tests whether the entire string matches the regular expression;
match tests whether there is a match of the regular expression that begins
at the first character of the string; and contains tests whether there is
a match of the regular expression at any position within the string.
The corresponding functions in the re package for these three match modes are
re.fullmatch,
re.match, and
re.search,
respectively.
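As a quick side-by-side (a minimal sketch; the sample strings below are arbitrary), the three modes can be compared directly:
import pandas as pd
s = pd.Series(["3a", "3a4b", "x3a"], dtype="string")
pat = r"[0-9][a-z]"
s.str.contains(pat)   # True, True, True  -> pattern found anywhere in the string
s.str.match(pat)      # True, True, False -> match must start at the beginning
s.str.fullmatch(pat)  # True, False, False -> the entire string must match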
Methods like match, fullmatch, contains, startswith, and
endswith take an extra na argument so missing values can be considered
True or False:
In [131]: s4 = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [132]: s4.str.contains("A", na=False)
Out[132]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
Creating indicator variables#
You can extract dummy variables from string columns.
For example if they are separated by a '|':
In [133]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")
In [134]: s.str.get_dummies(sep="|")
Out[134]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
String Index also supports get_dummies which returns a MultiIndex.
In [135]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])
In [136]: idx.str.get_dummies(sep="|")
Out[136]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])
See also get_dummies().
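A brief contrast with the top-level helper may also be useful (a sketch; the Series below is arbitrary): str.get_dummies splits on the separator first, while pd.get_dummies treats each full string as a single category.
import pandas as pd
s = pd.Series(["a", "a|b", "a|c"], dtype="string")
s.str.get_dummies(sep="|")   # columns a, b, c (one indicator per token)
pd.get_dummies(s)            # columns "a", "a|b", "a|c" (one indicator per whole string)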
Method summary#
cat() : Concatenate strings
split() : Split strings on delimiter
rsplit() : Split strings on delimiter working from the end of the string
get() : Index into each element (retrieve i-th element)
join() : Join strings in each element of the Series with passed separator
get_dummies() : Split strings on the delimiter returning DataFrame of dummy variables
contains() : Return boolean array if each string contains pattern/regex
replace() : Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix() : Remove prefix from string, i.e. only remove if string starts with prefix.
removesuffix() : Remove suffix from string, i.e. only remove if string ends with suffix.
repeat() : Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad() : Add whitespace to left, right, or both sides of strings
center() : Equivalent to str.center
ljust() : Equivalent to str.ljust
rjust() : Equivalent to str.rjust
zfill() : Equivalent to str.zfill
wrap() : Split long strings into lines with length less than a given width
slice() : Slice each string in the Series
slice_replace() : Replace slice in each string with passed value
count() : Count occurrences of pattern
startswith() : Equivalent to str.startswith(pat) for each element
endswith() : Equivalent to str.endswith(pat) for each element
findall() : Compute list of all occurrences of pattern/regex for each string
match() : Call re.match on each element, returning matched groups as list
extract() : Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall() : Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len() : Compute string lengths
strip() : Equivalent to str.strip
rstrip() : Equivalent to str.rstrip
lstrip() : Equivalent to str.lstrip
partition() : Equivalent to str.partition
rpartition() : Equivalent to str.rpartition
lower() : Equivalent to str.lower
casefold() : Equivalent to str.casefold
upper() : Equivalent to str.upper
find() : Equivalent to str.find
rfind() : Equivalent to str.rfind
index() : Equivalent to str.index
rindex() : Equivalent to str.rindex
capitalize() : Equivalent to str.capitalize
swapcase() : Equivalent to str.swapcase
normalize() : Return Unicode normal form. Equivalent to unicodedata.normalize
translate() : Equivalent to str.translate
isalnum() : Equivalent to str.isalnum
isalpha() : Equivalent to str.isalpha
isdigit() : Equivalent to str.isdigit
isspace() : Equivalent to str.isspace
islower() : Equivalent to str.islower
isupper() : Equivalent to str.isupper
istitle() : Equivalent to str.istitle
isnumeric() : Equivalent to str.isnumeric
isdecimal() : Equivalent to str.isdecimal
| 873
| 1,044
|
'|'.join is not working for a list of strings with more than one item
I have the following list of strings and the code:
mylist_bus = ["AAG","BOS","Ext"]
df.loc[df['lineId_EOD'].str.contains('AAG')]
with the following results:
ActivityType_EOD lineId_EOD
leg AAG_line7
leg AAG_line50
Then I want to add one more column for these specific values:
for i, row in df.iterrows():
if '|'.join(mylist_bus) in df.loc[i, "lineId_EOD"]:
df.loc[i,"category_EOD"] = "bus"
df.loc[df["lineId_EOD"].str.contains('AAG')]
However, the result is the same as before, and nothing changes:
ActivityType_EOD lineId_EOD
leg AAG_line7
leg AAG_line50
when I reduce the list to just one String, for example:
mylist_bus = ["AAG"]
then it works fine and I have the results:
ActivityType_EOD lineId_EOD category_EOD
leg AAG_line7 bus
leg AAG_line50 bus
but I need to have a list of more than one string.
|
67,303,440
|
In Pandas, how do I update the previous row in a iterator?
|
<p>I am trying to iterate a dataframe and update the previous row on each iteration. My index are date based.</p>
<pre><code>for i, row in df.iterrows():
# do my calculations here
myValue = row['myCol']
prev = row.shift()
prev['myCol'] = prev['myCol'] - myValue
</code></pre>
<p>There is no error but it does not save, how do I save this?</p>
| 67,305,191
| 2021-04-28T15:41:36.123000
| 1
| null | 0
| 94
|
python|pandas
|
<p>Without example data, it's unclear what you're trying to do. But using the operations in your <code>for</code> loop, it could <em>probably</em> be done like this instead, without any loop:</p>
<pre class="lang-py prettyprint-override"><code>myValue = df['myCol'] # the column you wanted and other calculations
df['myCol'] = df['myCol'].shift() - myValue
</code></pre>
<p>Depending on what you're trying to do, one of these should be what you want:</p>
<pre class="lang-py prettyprint-override"><code># starting with this df
myCol otherCol
0 2 6
1 9 3
2 4 8
3 2 8
4 1 7
# next row minus current row
df['myCol'] = df['myCol'].shift(-1) - df['myCol']
df
# result:
myCol otherCol
0 7.0 6
1 -5.0 3
2 -2.0 8
3 -1.0 8
4 NaN 7
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code># previous row minus current row
df['myCol'] = df['myCol'].shift() - df['myCol']
df
# result:
myCol otherCol
0 NaN 6
1 -7.0 3
2 5.0 8
3 2.0 8
4 1.0 7
</code></pre>
<p>And <code>myVal</code> can be anything, like some mathematical operations vectorised over an entire column:</p>
<pre class="lang-py prettyprint-override"><code>myVal = df['myCol'] * 2 + 3
# myVal is:
0 7
1 21
2 11
3 7
4 5
Name: myCol, dtype: int32
df['myCol'] = df['myCol'].shift(-1) - myVal
df
myCol otherCol
0 2.0 6
1 -17.0 3
2 -9.0 8
3 -6.0 8
4 NaN 7
</code></pre>
| 2021-04-28T17:34:56.423000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iterrows.html
|
Without example data, it's unclear what you're trying to do. But using the operations in your for loop, it could probably be done like this instead, without any loop:
myValue = df['myCol'] # the column you wanted and other calculations
df['myCol'] = df['myCol'].shift() - myValue
Depending on what you're trying to do, one of these should be what you want:
# starting with this df
myCol otherCol
0 2 6
1 9 3
2 4 8
3 2 8
4 1 7
# next row minus current row
df['myCol'] = df['myCol'].shift(-1) - df['myCol']
df
# result:
myCol otherCol
0 7.0 6
1 -5.0 3
2 -2.0 8
3 -1.0 8
4 NaN 7
or
# previous row minus current row
df['myCol'] = df['myCol'].shift() - df['myCol']
df
# result:
myCol otherCol
0 NaN 6
1 -7.0 3
2 5.0 8
3 2.0 8
4 1.0 7
And myVal can be anything, like some mathematical operations vectorised over an entire column:
myVal = df['myCol'] * 2 + 3
# myVal is:
0 7
1 21
2 11
3 7
4 5
Name: myCol, dtype: int32
df['myCol'] = df['myCol'].shift(-1) - myVal
df
myCol otherCol
0 2.0 6
1 -17.0 3
2 -9.0 8
3 -6.0 8
4 NaN 7
| 0
| 1,267
|
In Pandas, how do I update the previous row in a iterator?
I am trying to iterate a dataframe and update the previous row on each iteration. My index are date based.
for i, row in df.iterrows():
# do my calculations here
myValue = row['myCol']
prev = row.shift()
prev['myCol'] = prev['myCol'] - myValue
There is no error but it does not save, how do I save this?
|
69,024,359
|
Pandas groupby aggregate list
|
<p>I have this data frame</p>
<pre><code>df = pd.DataFrame({'upc':[1,1,1],'store':[1,2,3],'date':['jan','jan','jan'],'pred':[[1,1,1],[2,2,2],[3,3,3]],'act':[[4,4,4],[5,5,5],[6,6,6]]})
</code></pre>
<p>looks like this</p>
<pre><code> upc store date pred act
0 1 1 jan [1, 1, 1] [4, 4, 4]
1 1 2 jan [2, 2, 2] [5, 5, 5]
2 1 3 jan [3, 3, 3] [6, 6, 6]
</code></pre>
<p>When I do groupby and agg along <code>store</code> for <code>pred</code> and <code>act</code></p>
<pre><code>df.groupby(by = ["upc","date"]).agg({"pred":"sum","act":"sum"})
</code></pre>
<p>I get all the list concatenated</p>
<pre><code> pred act
upc date
1 jan [1, 1, 1, 2, 2, 2, 3, 3, 3] [4, 4, 4, 5, 5, 5, 6, 6, 6]
</code></pre>
<p>I want the sum of the list element-wise something like this</p>
<pre><code> upc date pred act
0 1 jan [6, 6, 6] [15, 15, 15]
</code></pre>
| 69,024,448
| 2021-09-02T04:57:29.857000
| 2
| null | 1
| 96
|
python|pandas
|
<p>Try this:</p>
<pre><code>>>> df.groupby(['upc', 'date'], as_index=False).agg({"pred": lambda x: pd.DataFrame(x.values.tolist()).sum().tolist(), "act": lambda x: pd.DataFrame(x.values.tolist()).sum().tolist()})
upc date pred act
0 1 jan [6, 6, 6] [15, 15, 15]
>>>
</code></pre>
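<p>If you prefer to avoid repeating the lambda, a small helper can be factored out and reused for both columns (a sketch that reuses the question's <code>df</code> and assumes every list within a group has the same length):</p>
<pre><code>import numpy as np

def elementwise_sum(lists):
    # stack one group's lists and sum position-by-position
    return np.sum(lists.tolist(), axis=0).tolist()

df.groupby(['upc', 'date'], as_index=False).agg({'pred': elementwise_sum, 'act': elementwise_sum})
#    upc date       pred           act
# 0    1  jan  [6, 6, 6]  [15, 15, 15]
</code></pre>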
| 2021-09-02T05:09:36.890000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html
|
pandas.core.groupby.DataFrameGroupBy.aggregate#
DataFrameGroupBy.aggregate(func=None, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
func : function, str, list or dict
Function to use for aggregating the data. If a function, must either
Try this:
>>> df.groupby(['upc', 'date'], as_index=False).agg({"pred": lambda x: pd.DataFrame(x.values.tolist()).sum().tolist(), "act": lambda x: pd.DataFrame(x.values.tolist()).sum().tolist()})
upc date pred act
0 1 jan [6, 6, 6] [15, 15, 15]
>>>
work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
Can also accept a Numba JIT function with
engine='numba' specified. Only passing a single function is supported
with this engine.
If the 'numba' engine is chosen, the function must be
a user defined function with values and index as the
first and second arguments respectively in the function signature.
Each group’s index will be passed to the user defined function
and optionally available for use.
Changed in version 1.1.0.
*args : Positional arguments to pass to func.
engine : str, default None
'cython' : Runs the function through C-extensions from cython.
'numba' : Runs the function through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.1.0.
engine_kwargs : dict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False} and will be
applied to the function
New in version 1.1.0.
**kwargs : Keyword arguments to be passed into func.
Returns
DataFrame
See also
DataFrame.groupby.apply : Apply function func group-wise and combine the results together.
DataFrame.groupby.transform : Transforms the Series on each group based on the given function.
DataFrame.aggregate : Aggregate using one or more operations over the specified axis.
Notes
When using engine='numba', there will be no “fall back” behavior internally.
The group data and group index will be passed as numpy arrays to the JITed
user defined function, and no alternative execution attempts will be tried.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func,
see the examples below.
Examples
>>> df = pd.DataFrame(
... {
... "A": [1, 1, 2, 2],
... "B": [1, 2, 3, 4],
... "C": [0.362838, 0.227877, 1.267767, -0.562860],
... }
... )
>>> df
A B C
0 1 1 0.362838
1 1 2 0.227877
2 2 3 1.267767
3 2 4 -0.562860
The aggregation is for each column.
>>> df.groupby('A').agg('min')
B C
A
1 1 0.227877
2 3 -0.562860
Multiple aggregations
>>> df.groupby('A').agg(['min', 'max'])
B C
min max min max
A
1 1 2 0.227877 0.362838
2 3 4 -0.562860 1.267767
Select a column for aggregation
>>> df.groupby('A').B.agg(['min', 'max'])
min max
A
1 1 2
2 3 4
User-defined function for aggregation
>>> df.groupby('A').agg(lambda x: sum(x) + 2)
B C
A
1 5 2.590715
2 9 2.704907
Different aggregations per column
>>> df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'})
B C
min max sum
A
1 1 2 0.590715
2 3 4 0.704907
To control the output names with different aggregations per column,
pandas supports “named aggregation”
>>> df.groupby("A").agg(
... b_min=pd.NamedAgg(column="B", aggfunc="min"),
... c_sum=pd.NamedAgg(column="C", aggfunc="sum"))
b_min c_sum
A
1 1 0.590715
2 3 0.704907
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column.
Pandas provides the pandas.NamedAgg namedtuple with the fields
['column', 'aggfunc'] to make it clearer what the arguments are.
As usual, the aggregation can be a callable or a string alias.
See Named aggregation for more.
Changed in version 1.3.0: The resulting dtype will reflect the return value of the aggregating function.
>>> df.groupby("A")[["B"]].agg(lambda x: x.astype(float).min())
B
A
1 1.0
2 3.0
| 374
| 653
|
Pandas groupby aggregate list
I have this data frame
df = pd.DataFrame({'upc':[1,1,1],'store':[1,2,3],'date':['jan','jan','jan'],'pred':[[1,1,1],[2,2,2],[3,3,3]],'act':[[4,4,4],[5,5,5],[6,6,6]]})
looks like this
upc store date pred act
0 1 1 jan [1, 1, 1] [4, 4, 4]
1 1 2 jan [2, 2, 2] [5, 5, 5]
2 1 3 jan [3, 3, 3] [6, 6, 6]
When I do groupby and agg along store for pred and act
df.groupby(by = ["upc","date"]).agg({"pred":"sum","act":"sum"})
I get all the list concatenated
pred act
upc date
1 jan [1, 1, 1, 2, 2, 2, 3, 3, 3] [4, 4, 4, 5, 5, 5, 6, 6, 6]
I want the sum of the list element-wise something like this
upc date pred act
0 1 jan [6, 6, 6] [15, 15, 15]
|
70,347,512
|
Convert a GroupBy object with no aggregation function into a dataframe while keeping the grouping
|
<p>Say I have the following dataframe <code>df</code></p>
<pre><code>gender | country | age
-------+---------+-----
M DK 20
M ES 19
M ES 21
M DK 30
F DK 20
F ES 19
F ES 21
F DK 12
</code></pre>
<p>Is there a way to get the Grouped data as a dataframe e.g</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df_grp = df.group_by(["gender","country"])["age"] #Group it
df_grouped_df = pd.DataFrame(df_grp) #Convert to ordinary dataframe
print(df_grouped_df)
age
gender country
------+----------+
20
M DK 30
-------------
ES 19
21
---------------------
DK 20
F 30
------------
ES 19
21
</code></pre>
<p>EDIT:</p>
<p>Using <code>set_index</code> does not seem to work, since it does not group the second column specified (e.g. <code>country</code>);</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"gender":["M"]*4+["F"]*4,"country":["DK","ES","ES","DK"]*2,"age":[1,2,3,4,5,6,7,8]})
gender country age
M DK 1
M ES 2
M ES 3
M DK 4
F DK 5
F ES 6
F ES 7
F DK 8
df.set_index(["gender","country"])
age
gender country
M DK 1
ES 2
ES 3
DK 4
F DK 5
ES 6
ES 7
DK 8
</code></pre>
<p>As noted it is not grouped for each <code>country</code> as required</p>
| 70,347,750
| 2021-12-14T10:40:59.153000
| 2
| null | 1
| 102
|
python|pandas
|
<p>Probably not the cleanest approach, but it gives the visual layout you want:</p>
<pre><code>out = df.sort_values(['gender', 'country'], ascending=[False, True]).reset_index(drop=True)
out.loc[out.duplicated(['gender', 'country']), 'country'] = ''
out.loc[out.duplicated('gender'), 'gender'] = ''
print(out.set_index(['gender', 'country']))
# Output:
age
gender country
M DK 20
30
ES 19
21
F DK 20
12
ES 19
21
</code></pre>
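<p>For what it's worth, the main reason a plain <code>set_index</code> looked ungrouped in the question is that the rows were not sorted by the key columns. Sorting first and relying on pandas' sparsified MultiIndex display gets close without the manual blanking (a sketch reusing the question's <code>df</code>):</p>
<pre><code>out = df.sort_values(['gender', 'country'], ascending=[False, True]).set_index(['gender', 'country'])
print(out)  # repeated outer index labels are blanked automatically when printed
</code></pre>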
| 2021-12-14T10:59:06.147000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
Probably not the cleanest approach, but it gives the visual layout you want:
out = df.sort_values(['gender', 'country'], ascending=[False, True]).reset_index(drop=True)
out.loc[out.duplicated(['gender', 'country']), 'country'] = ''
out.loc[out.duplicated('gender'), 'gender'] = ''
print(out.set_index(['gender', 'country']))
# Output:
age
gender country
M DK 20
30
ES 19
21
F DK 20
12
ES 19
21
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
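For concreteness, a rough pandas equivalent of that SQL statement (a minimal sketch with made-up column values) looks like this:
import pandas as pd

some_table = pd.DataFrame({
    "Column1": ["a", "a", "b"],
    "Column2": ["x", "y", "x"],
    "Column3": [1.0, 2.0, 3.0],
    "Column4": [10, 20, 30],
})

# group by the two key columns, then aggregate the remaining ones
some_table.groupby(["Column1", "Column2"]).agg({"Column3": "mean", "Column4": "sum"})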
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
mean() : Compute mean of groups
sum() : Compute sum of group values
size() : Compute group sizes
count() : Compute count of group
std() : Standard deviation of groups
var() : Compute variance of groups
sem() : Standard error of the mean of groups
describe() : Generates descriptive statistics
first() : Compute first of group values
last() : Compute last of group values
nth() : Take nth value, or a subset if n is a list
min() : Compute min of group values
max() : Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
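To make those two remarks concrete, here is a minimal sketch (the frame below is arbitrary and separate from the examples above):
import pandas as pd

df_small = pd.DataFrame({"A": ["x", "x", "y"], "B": [1, 2, 3]})

# any callable that reduces a Series to a scalar is a valid aggregation
df_small.groupby("A").agg(lambda ser: ser.max() - ser.min())

# nth picks the n-th row of each group, so for a single n it behaves like a reducer
df_small.groupby("A").nth(0)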
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation functions
require additional arguments, partially apply them with functools.partial().
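A small sketch of that pattern (the frame is re-created here so the snippet stands alone; the output column name is arbitrary):
import functools
import pandas as pd

animals = pd.DataFrame({
    "kind": ["cat", "dog", "cat", "dog"],
    "height": [9.1, 6.0, 9.5, 34.0],
})

# quantile needs an extra q argument, so bind it up front with functools.partial
q75 = functools.partial(pd.Series.quantile, q=0.75)

animals.groupby("kind").agg(height_q75=("height", q75))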
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
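For completeness, a self-contained sketch of calling such a function (values chosen arbitrarily; in the session above the equivalent call is grouped.apply(f)):
import pandas as pd

df_demo = pd.DataFrame({"A": ["foo", "bar", "foo"], "C": [1.0, 2.0, 4.0]})
grouped_demo = df_demo.groupby("A")["C"]

def demean(group):
    # one two-column frame per group; apply glues them back together
    return pd.DataFrame({"original": group, "demeaned": group - group.mean()})

grouped_demo.apply(demean)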
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
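As an illustration only (not one of the documented examples), a minimal sketch of the required signature, assuming Numba is installed; the frame and function names are made up:
import numpy as np
import pandas as pd
def numba_mean(values, index):
    # values: this group's column data, passed as a NumPy array
    # index: this group's index, also passed as a NumPy array (unused here)
    return np.mean(values)
df_numba = pd.DataFrame({"A": ["a", "a", "b"], "B": [1.0, 2.0, 3.0]})
# pandas JIT-compiles numba_mean itself when engine="numba" is requested
df_numba.groupby("A")["B"].agg(numba_mean, engine="numba")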
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
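In other words, as a quick sketch using the frame above:
df.groupby("A")["C"].std()                    # aggregate only the column of interest
df.groupby("A").std(numeric_only=True)["C"]   # aggregate every column, then select C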
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
By default, if there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will be no “NA group” or
“NaT group” unless you opt in via dropna=False. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
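A minimal sketch of the default behaviour (the values are illustrative; the dropna=False opt-out requires pandas 1.1 or later):
import numpy as np
import pandas as pd
vals = pd.Series([1, 2, 3])
keys = pd.Series(["a", np.nan, "a"])
vals.groupby(keys).sum()                 # the row whose key is NaN is dropped
vals.groupby(keys, dropna=False).sum()   # opts back in to an explicit NaN group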
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
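For comparison, a rough sketch using the same frame: the grouped call above draws one subplot per group, whereas
df.boxplot(by="g")
draws one subplot per numeric column, with the boxes inside each subplot split by the values of g.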
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array of bin labels (here only 0s and 1s) which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we aggregate the samples into bins. By applying the std() function, we condense the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 459
| 963
|
Convert a GroupBy object with no aggregation function into a dataframe while keeping the grouping
Say I have the following dataframe df
gender | country | age
-------+---------+-----
M DK 20
M ES 19
M ES 21
M DK 30
F DK 20
F ES 19
F ES 21
F DK 12
Is there a way to get the Grouped data as a dataframe e.g
import pandas as pd
df_grp = df.groupby(["gender","country"])["age"] #Group it
df_grouped_df = pd.DataFrame(df_grp) #Convert to ordinary dataframe
print(df_grouped_df)
                 age
gender country
M      DK         20
                  30
       ES         19
                  21
F      DK         20
                  30
       ES         19
                  21
EDIT:
Using set_index does not seem to work, since it does not group the second column specified (e.g. country):
df = pd.DataFrame({"gender":["M"]*4+["F"]*4,"country":["DK","ES","ES","DK"]*2,"age":[1,2,3,4,5,6,7,8]})
gender country age
M DK 1
M ES 2
M ES 3
M DK 4
F DK 5
F ES 6
F ES 7
F DK 8
df.set_index(["gender","country"])
age
gender country
M DK 1
ES 2
ES 3
DK 4
F DK 5
ES 6
ES 7
DK 8
As noted it is not grouped for each country as required
|
68,546,936
|
How to sum column in reverse order in pandas with groupby
|
<p>I currently need to replicate this dataset, where I must group by SubjectID copy and count how many rows have a score of 1 in the future. Essentially I must count them in reverse, but I'm not sure how to do that and group by SubjectID at the same time.</p>
<pre><code> SubjectID copy Score Number of All Future Hosp for O column
0 phchp003 1 4
1 phchp003 1 3
2 phchp003 1 2
3 phchp003 1 1
4 phchp003 1 0
5 phchp004 1 4
6 phchp004 1 3
7 phchp004 1 2
8 phchp004 1 1
9 phchp004 1 0
10 phchp006 0 3
11 phchp006 0 3
12 phchp006 0 3
13 phchp006 0 3
14 phchp006 1 2
15 phchp006 1 1
16 phchp006 1 0
</code></pre>
<p>I currently have</p>
<pre><code>data['Sum']= data.groupby(['SubjectID copy'])['Score'].cumsum()
</code></pre>
<p>which gives me the values but accumulated from the top down, whereas I need the sum to go from the bottom up.</p>
| 68,547,056
| 2021-07-27T14:39:15.687000
| 2
| null | 0
| 102
|
python|pandas
|
<p>We can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html#pandas-dataframe-loc" rel="nofollow noreferrer"><code>loc</code></a> to reverse before using <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>groupby transform</code></a>. We can then use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.shift.html#pandas-series-shift" rel="nofollow noreferrer"><code>shift</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a> to only consider "future" values:</p>
<pre><code>data['Sum'] = (
data.loc[::-1] # Reverse DataFrame
.groupby(['SubjectID copy'])['Score'] # Groups
.transform(lambda s: s.shift(fill_value=0).cumsum()) # handle transformation
)
</code></pre>
<p><code>data</code>:</p>
<pre><code> SubjectID copy Score Sum
0 phchp003 1 4
1 phchp003 1 3
2 phchp003 1 2
3 phchp003 1 1
4 phchp003 1 0
5 phchp004 1 4
6 phchp004 1 3
7 phchp004 1 2
8 phchp004 1 1
9 phchp004 1 0
10 phchp006 0 3
11 phchp006 0 3
12 phchp006 0 3
13 phchp006 0 3
14 phchp006 1 2
15 phchp006 1 1
16 phchp006 1 0
</code></pre>
| 2021-07-27T14:47:51.297000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
We can use loc to reverse before using groupby transform. We can then use shift and cumsum to only consider "future" values:
data['Sum'] = (
data.loc[::-1] # Reverse DataFrame
.groupby(['SubjectID copy'])['Score'] # Groups
.transform(lambda s: s.shift(fill_value=0).cumsum()) # handle transformation
)
data:
SubjectID copy Score Sum
0 phchp003 1 4
1 phchp003 1 3
2 phchp003 1 2
3 phchp003 1 1
4 phchp003 1 0
5 phchp004 1 4
6 phchp004 1 3
7 phchp004 1 2
8 phchp004 1 1
9 phchp004 1 0
10 phchp006 0 3
11 phchp006 0 3
12 phchp006 0 3
13 phchp006 0 3
14 phchp006 1 2
15 phchp006 1 1
16 phchp006 1 0
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
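As a minimal sketch of the three categories before the detailed sections that follow (the frame and column names here are made up for illustration):
import pandas as pd
toy = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 10]})
g = toy.groupby("key")["val"]
g.sum()                               # aggregation: one value per group
g.transform(lambda s: s - s.mean())   # transformation: like-indexed result
g.filter(lambda s: s.sum() > 5)       # filtration: keep or drop whole groups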
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
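For instance, a minimal sketch of the dict mapping (the labels and group names are illustrative):
import pandas as pd
s = pd.Series([1, 2, 3, 4], index=["a", "b", "c", "d"])
mapping = {"a": "vowel", "b": "consonant", "c": "consonant", "d": "consonant"}
s.groupby(mapping).sum()   # index labels are translated to group names via the dict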
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
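A minimal sketch of that pattern, reusing the animals frame above (scaled_sum and its scale argument are made up for illustration):
import functools
def scaled_sum(s, scale):
    return s.sum() * scale
animals.groupby("kind").agg(
    scaled_weight=("weight", functools.partial(scaled_sum, scale=0.5))
)
# the extra argument is supplied via partial rather than through **kwargs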
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift..
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
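The call that actually applies this function is not shown in this excerpt; a minimal, illustrative sketch of how it would typically be invoked (the result name is chosen here, not taken from the guide):
# each group's Series is expanded into a two-column DataFrame using the
# 'original' and 'demeaned' columns defined inside f
result = grouped.apply(f)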
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
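As a minimal illustration of this (the small frame below is not one of the guide's examples):
df_small = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
# the lambda returns integers, so the applied result has an integer dtype
df_small.groupby("key")["val"].apply(lambda s: s.sum())
# returning floats from the same data instead yields a float64 result
df_small.groupby("key")["val"].apply(lambda s: float(s.sum()))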
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
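A minimal sketch of such a user-defined function, assuming Numba is installed (the frame and function names here are illustrative, not part of the guide):
import numpy as np
import pandas as pd

# the signature must start with (values, index); both arrive as NumPy arrays
def group_mean(values, index):
    return np.mean(values)

df_nb = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})
df_nb.groupby("key")["val"].agg(group_mean, engine="numba")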
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only of interest for one column (here colname), that column may be selected
before applying the aggregation function.
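A minimal sketch of the more efficient pattern described above, using the example frame's C column in place of colname:
df.groupby("A")["C"].std()                     # aggregate only the selected column
# rather than
df.groupby("A").std(numeric_only=True)["C"]    # aggregate every column, then select one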
Note
Any object column, also if it contains numerical values such as Decimal
objects, is considered as a “nuisance” columns. They are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned group index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
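A minimal sketch of this behaviour (the Series and keys below are illustrative, and numpy is assumed to be imported as np, as elsewhere in this guide); passing dropna=False is the way to keep an NA group if you need one:
s_na = pd.Series([1, 2, 3])
keys = ["a", np.nan, "a"]
s_na.groupby(keys).sum()                  # the NaN key is dropped; only group "a" appears
s_na.groupby(keys, dropna=False).sum()    # dropna=False keeps a NaN group as well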
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
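.pipe also forwards any extra positional or keyword arguments to the function; a minimal sketch, with an illustrative function name, reusing the Store/Product frame from above:
def top_revenue(gb, n=1):
    # gb is the GroupBy object passed in by .pipe; n is forwarded by .pipe
    return gb.Revenue.sum().nlargest(n)

df.groupby(["Store", "Product"]).pipe(top_revenue, n=2)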
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are not datetime-like, the following procedure can be used.
In the following examples, df.index // 5 returns an integer array (containing only 0s and 1s here) which is used to determine which rows fall into each group for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples in bins. By applying the std() function, we aggregate the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 147
| 1,019
|
How to sum column in reverse order in pandas with groupby
I currently need to replicate this dataset, where I must group by 'SubjectID copy' and count how many rows have a Score of 1 in the future. Basically I must count them in reverse, but I'm not sure how to do that and group by SubjectID at the same time.
SubjectID copy Score Number of All Future Hosp for O column
0 phchp003 1 4
1 phchp003 1 3
2 phchp003 1 2
3 phchp003 1 1
4 phchp003 1 0
5 phchp004 1 4
6 phchp004 1 3
7 phchp004 1 2
8 phchp004 1 1
9 phchp004 1 0
10 phchp006 0 3
11 phchp006 0 3
12 phchp006 0 3
13 phchp006 0 3
14 phchp006 1 2
15 phchp006 1 1
16 phchp006 1 0
I currently have
data['Sum']= data.groupby(['SubjectID copy'])['Score'].cumsum()
which gives me the cumulative sum from the top down, whereas I need mine to accumulate from the bottom up.
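The accepted answer for this record is not included in this excerpt; one common sketch for a future-looking count, assuming Score only takes the values 0 and 1 as in the sample shown, is:
data["Future"] = (
    data[::-1].groupby("SubjectID copy")["Score"].cumsum()  # cumulative sum computed bottom-up
    - data["Score"]                                         # subtract the current row so only strictly future rows count
)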
|
64,711,039
|
background_gradient by dataframe not by column or row
|
<p>This is very close to what I need:</p>
<pre><code>index= ['aaa', 'bbb', 'ccc', 'ddd', 'eee']
cols = ['A', 'B', 'C', 'D']
df = pd.DataFrame(np.random.randn(5, 4), index=index, columns=cols)
df.style.background_gradient(cmap='Reds')
</code></pre>
<p>The only issue is that the colour shading is based on ranking of values by column. I would like to apply it to all numbers in the frame, so the largest x% in the dataframe should get the darkest red. As it stands, if you look at the frame as a whole, numbers that are very different in magnitude share the same colour.</p>
| 64,711,188
| 2020-11-06T08:19:42.667000
| 1
| null | 0
| 105
|
python|pandas
|
<p>I believe you're looking for the <code>axis=None</code> argument:</p>
<pre><code>index= ['aaa', 'bbb', 'ccc', 'ddd', 'eee']
cols = ['A', 'B', 'C', 'D']
df = pd.DataFrame(np.random.randn(5, 4), index=index, columns=cols)
df["B"] *= 10
df.style.background_gradient(cmap='Reds', axis=None)
</code></pre>
<p><a href="https://i.stack.imgur.com/R8K7e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R8K7e.png" alt="enter image description here" /></a></p>
| 2020-11-06T08:31:06.623000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.background_gradient.html
|
pandas.io.formats.style.Styler.background_gradient#
Styler.background_gradient(cmap='PuBu', low=0, high=0, axis=0, subset=None, text_color_threshold=0.408, vmin=None, vmax=None, gmap=None)#
Color the background in a gradient style.
The background color is determined according
to the data in each column, row or frame, or by a given
gradient map. Requires matplotlib.
Parameters
cmap (str or colormap): Matplotlib colormap.
low (float): Compress the color range at the low end. This is a multiple of the data
range to extend below the minimum; good values usually in [0, 1],
I believe you're looking for the axis=None argument:
index= ['aaa', 'bbb', 'ccc', 'ddd', 'eee']
cols = ['A', 'B', 'C', 'D']
df = pd.DataFrame(np.random.randn(5, 4), index=index, columns=cols)
df["B"] *= 10
df.style.background_gradient(cmap='Reds', axis=None)
defaults to 0.
high (float): Compress the color range at the high end. This is a multiple of the data
range to extend above the maximum; good values usually in [0, 1],
defaults to 0.
axis ({0, 1, “index”, “columns”, None}, default 0): Apply to each column (axis=0 or 'index'), to each row
(axis=1 or 'columns'), or to the entire DataFrame at once
with axis=None.
subset (label, array-like, IndexSlice, optional): A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data to before applying the function.
text_color_threshold (float or int): Luminance threshold for determining text color in [0, 1]. Facilitates text
visibility across varying background colors. All text is dark if 0, and
light if 1, defaults to 0.408.
vmin (float, optional): Minimum data value that corresponds to colormap minimum value.
If not specified the minimum value of the data (or gmap) will be used.
New in version 1.0.0.
vmax (float, optional): Maximum data value that corresponds to colormap maximum value.
If not specified the maximum value of the data (or gmap) will be used.
New in version 1.0.0.
gmap (array-like, optional): Gradient map for determining the background colors. If not supplied
will use the underlying data from rows, columns or frame. If given as an
ndarray or list-like must be an identical shape to the underlying data
considering axis and subset. If given as DataFrame or Series must
have same index and column labels considering axis and subset.
If supplied, vmin and vmax should be given relative to this
gradient map.
New in version 1.3.0.
Returns
self (Styler)
See also
Styler.text_gradient: Color the text in a gradient style.
Notes
When using low and high the range
of the gradient, given by the data if gmap is not given or by gmap,
is extended at the low end effectively by
map.min - low * map.range and at the high end by
map.max + high * map.range before the colors are normalized and determined.
If combining with vmin and vmax the map.min, map.max and
map.range are replaced by values according to the values derived from
vmin and vmax.
This method will preselect numeric columns and ignore non-numeric columns
unless a gmap is supplied in which case no preselection occurs.
Examples
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
Shading the values column-wise, with axis=0, preselecting numeric columns
>>> df.style.background_gradient(axis=0)
Shading all values collectively using axis=None
>>> df.style.background_gradient(axis=None)
Compress the color map from the both low and high ends
>>> df.style.background_gradient(axis=None, low=0.75, high=1.0)
Manually setting vmin and vmax gradient thresholds
>>> df.style.background_gradient(axis=None, vmin=6.7, vmax=21.6)
Setting a gmap and applying to all columns with another cmap
>>> df.style.background_gradient(axis=0, gmap=df['Temp (c)'], cmap='YlOrRd')
...
Setting the gradient map for a dataframe (i.e. axis=None), we need to
explicitly state subset to match the gmap shape
>>> gmap = np.array([[1,2,3], [2,3,4], [3,4,5]])
>>> df.style.background_gradient(axis=None, gmap=gmap,
... cmap='YlOrRd', subset=['Temp (c)', 'Rain (mm)', 'Wind (m/s)']
... )
| 632
| 894
|
background_gradient by dataframe not by column or row
This is very close to what I need:
index= ['aaa', 'bbb', 'ccc', 'ddd', 'eee']
cols = ['A', 'B', 'C', 'D']
df = pd.DataFrame(np.random.randn(5, 4), index=index, columns=cols)
df.style.background_gradient(cmap='Reds')
The only issue is that the colour shading is based on ranking of values by column. I would like to apply it to all numbers in the frame, so the largest x% in the dataframe should get the darkest red. As it stands, if you look at the frame as a whole, numbers that are very different in magnitude share the same colour.
|
70,396,087
|
Using lambda function to split a column in a Pandas dataset
|
<p>I am doing a Udemy tutorial on data analysis and machine learning and I have come across an issue that I am not able to understand fully.</p>
<p>The dataset being used is available on Kaggle and is called 911.csv.</p>
<p>I am supposed to create a new feature:
<strong>In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.</strong></p>
<p><em>For example, if the title column value is EMS: BACK PAINS/INJURY, the Reason column value would be EMS.</em></p>
<p>when I do this for the first row in a column in works</p>
<pre><code>x=df['title'].iloc[0]
x.split(':')[0]
</code></pre>
<p>But the issue is when I do the following (<strong>I am trying to do this to remove all the ':' in the 'title' column</strong>):</p>
<pre><code>x = df['title']
x.split(':')
</code></pre>
<p>I get the following error</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'split'</p>
</blockquote>
<p>Can you tell me what am I doing wrong here?</p>
| 70,404,927
| 2021-12-17T16:22:39.830000
| 2
| null | 2
| 881
|
python|pandas
|
<p>I was able to get the code working by doing the following step-by-step approach:</p>
<pre><code>df['title']
df['title'].iloc[0]
x = df['title'].iloc[0]
x.split(':')
x.split(':')[0]
y = x.split(':')[0]
y
</code></pre>
<p>And then create the lambda function using the variable y created above:</p>
<pre><code>df['reason'] = df['title'].apply(lambda y: y.split(':')[0] )
df['reason']
</code></pre>
| 2021-12-18T16:00:45.283000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
I was able to get the code working by doing the following step-by-step approach:
df['title']
df['title'].iloc[0]
x = df['title'].iloc[0]
x.split(':')
x.split(':')[0]
y = x.split(':')[0]
y
And then create the lambda function using the variable y created above:
df['reason'] = df['title'].apply(lambda y: y.split(':')[0] )
df['reason']
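As a side note, not part of the quoted answer: the same column can usually be built without apply by using the vectorized string accessor, for example:
df['reason'] = df['title'].str.split(':').str[0]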
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified in many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping (a short sketch of this case follows the list).
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
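A minimal sketch of the dict case (the Series and mapping below are illustrative, not taken from the guide):
s_map = pd.Series([1, 2, 3, 4], index=["a", "b", "c", "d"])
mapping = {"a": "first", "b": "first", "c": "second", "d": "second"}
s_map.groupby(mapping).sum()   # index labels are translated to the group names "first" and "second"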
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function: Description
mean(): Compute mean of groups
sum(): Compute sum of group values
size(): Compute group sizes
count(): Compute count of group
std(): Standard deviation of groups
var(): Compute variance of groups
sem(): Standard error of the mean of groups
describe(): Generates descriptive statistics
first(): Compute first of group values
last(): Compute last of group values
nth(): Take nth value, or a subset if n is a list
min(): Compute min of group values
max(): Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
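A minimal sketch of that approach, reusing the animals frame from above (the 0.75 quantile and the output column name are illustrative choices):
import functools

# pre-bind the extra argument; the partially-applied function then only needs the Series
q75 = functools.partial(pd.Series.quantile, q=0.75)

animals.groupby("kind").agg(
    height_q75=pd.NamedAgg(column="height", aggfunc=q75),
)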
Note
For Python 3.5 and earlier, the order of **kwargs in a functions was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly: the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
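A minimal sketch of such a UDF (an illustration only; it assumes the optional numba dependency is installed and is not part of the original text):
import numpy as np
import pandas as pd

def group_mean(values, index):
    # values: the group's data as a NumPy array; index: the group's index as a NumPy array
    return np.mean(values)

df_nb = pd.DataFrame({"key": ["a", "a", "b"], "data": [1.0, 2.0, 3.0]})
df_nb.groupby("key").agg(group_mean, engine="numba")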
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
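A minimal sketch of this exclusion, on a small hypothetical frame built only for the example:
import numpy as np
import pandas as pd

df_na = pd.DataFrame({"key": ["a", np.nan, "a"], "val": [1, 2, 3]})
df_na.groupby("key").sum()
# only group "a" appears, with val == 4; the NaN-keyed row is dropped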
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
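A short hedged sketch, assuming return_type is forwarded to the underlying DataFrame.boxplot call as described above:
# keep only the Axes objects for each group ("dict" and "both" are the other options)
axes_by_group = df.groupby("g").boxplot(return_type="axes")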
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resample to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an array of integer group labels (0s and 1s here) which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 684
| 1,015
|
Using lambda function to split a column in a Pandas dataset
I am doing a Udemy tutorial on data analysis and machine learning and I have come across an issue that I am not able to understand fully.
The dataset being used is available on Kaggle and is called 911.csv.
I am supposed to create a new feature:
In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.
For example, if the title column value is EMS: BACK PAINS/INJURY, the Reason column value would be EMS.
When I do this for the first row of the column, it works
x=df['title'].iloc[0]
x.split(':')[0]
But the issue arises when I do the following (I am trying to do this to remove all the ':' in the 'title' column):
x = df['title']
x.split(':')
I get the following error
AttributeError: 'Series' object has no attribute 'split'
Can you tell me what I am doing wrong here?
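One way this is commonly resolved (a hedged sketch, not part of the original question): a plain Python string has .split, but a Series does not, so apply the split element-wise.
# assumes the 911.csv layout described above
df["Reason"] = df["title"].apply(lambda title: title.split(":")[0])
# or, with the vectorized string accessor:
# df["Reason"] = df["title"].str.split(":").str[0]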
|
68,857,847
|
How to fix incorrect format in pandas column?
|
<p>I have a pandas dataframe called df1</p>
<pre><code>ID| ACTIVITY | Date
1 | activity 1 | 2/04/2016
2 | activity 2 | 3/04/2015
3 | activity 3 | 7/05/2016
3 | activity 4 | 2/04/2016
4 | activity 3 | 2/04/2017
5 | activity 6 | 2/04/2015
5 | activity 2 | 2/04/2016
6 | activity 1 | 2/04/2018
</code></pre>
<p>I have to create two dataframes out of these: one where activity 1 is being done in year 2015
and the other where activity 2 is being done in year 2015.
I am using the following code</p>
<pre><code>for index,rows in df1.iterrows():
if rows['Date'].year == 2015 :
if rows['ACTIVITY'] == 'activity 1' :
data1 = rows['ID','Date']
answer1 = answer1.append(data1)
if rows['ACTIVITY'] == 'activity 2' :
data2 = rows['ID','Date']
answer2 = answer2.append(data2)
</code></pre>
<p>this is giving following result</p>
<pre><code>activity1
Index | 0
0 | Id1
1 | Date1
0 | ID2
1 | Date2
0 | ID3
1 | Date 3 corresponding to that id
</code></pre>
<p>similarly</p>
<pre><code>activity2
Index | 0
0 | Id1
1 | Date1
0 | ID2
1 | Date2
0 | ID3
1 | Date 3 corresponding to that id
</code></pre>
<p>what i want is</p>
<pre><code> activity1
ID | Date
ID1 | Date1
</code></pre>
<p>How to do this?</p>
| 68,857,862
| 2021-08-20T06:32:39.193000
| 1
| null | 1
| 115
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> for filter by conditions and columns names:</p>
<pre><code>mask = (df['Date'].dt.year == 2015)
df1 = df.loc[mask & (df['ACTIVITY'] == 'activity 1'), ['ID','Date']]
df2 = df.loc[mask & (df['ACTIVITY'] == 'activity 2'), ['ID','Date']]
</code></pre>
<p>Your solution is not recommended, check <a href="https://stackoverflow.com/a/55557758/2901002">this</a>.</p>
<p>But is possible by:</p>
<pre><code>df1['Date'] = pd.to_datetime(df1['Date'])
answer1, answer2 = pd.DataFrame(), pd.DataFrame()
for index,rows in df1.iterrows():
if rows['Date'].year == 2015 :
if rows['activity'] == 'activity 1' :
data1 = rows[['ID','Date']]
answer1 = answer1.append(data1)
if rows['activity'] == 'activity 2' :
data2 = rows[['ID','Date']]
answer2 = answer2.append(data2)
</code></pre>
| 2021-08-20T06:34:43.647000
| 1
|
https://pandas.pydata.org/docs/_sources/whatsnew/v1.4.2.rst.txt
|
Use DataFrame.loc for filter by conditions and columns names:
mask = (df['Date'].dt.year == 2015)
df1 = df.loc[mask & (df['ACTIVITY'] == 'activity 1'), ['ID','Date']]
df2 = df.loc[mask & (df['ACTIVITY'] == 'activity 2'), ['ID','Date']]
Your solution is not recommended, check this.
But is possible by:
df1['Date'] = pd.to_datetime(df1['Date'])
answer1, answer2 = pd.DataFrame(), pd.DataFrame()
for index,rows in df1.iterrows():
if rows['Date'].year == 2015 :
if rows['activity'] == 'activity 1' :
data1 = rows[['ID','Date']]
answer1 = answer1.append(data1)
if rows['activity'] == 'activity 2' :
data2 = rows[['ID','Date']]
answer2 = answer2.append(data2)
| 0
| 714
|
How to fix incorrect format in pandas column?
I have a pandas dataframe called df1
ID| ACTIVITY | Date
1 | activity 1 | 2/04/2016
2 | activity 2 | 3/04/2015
3 | activity 3 | 7/05/2016
3 | activity 4 | 2/04/2016
4 | activity 3 | 2/04/2017
5 | activity 6 | 2/04/2015
5 | activity 2 | 2/04/2016
6 | activity 1 | 2/04/2018
I have to create two dataframes out of these: one where activity 1 is being done in year 2015
and the other where activity 2 is being done in year 2015.
I am using the following code
for index,rows in df1.iterrows():
if rows['Date'].year == 2015 :
if rows['ACTIVITY'] == 'activity 1' :
data1 = rows['ID','Date']
answer1 = answer1.append(data1)
if rows['ACTIVITY'] == 'activity 2' :
data2 = rows['ID','Date']
answer2 = answer2.append(data2)
this is giving following result
activity1
Index | 0
0 | Id1
1 | Date1
0 | ID2
1 | Date2
0 | ID3
1 | Date 3 corresponding to that id
similarly
activity2
Index | 0
0 | Id1
1 | Date1
0 | ID2
1 | Date2
0 | ID3
1 | Date 3 corresponding to that id
what i want is
activity1
ID | Date
ID1 | Date1
How to do this?
|
66,001,096
|
Pandas Column join list values
|
<p>Using Python, Pandas, I have a couple columns that have lists in them, I want to convert the list to a string.</p>
<p>Example:I have the following values in a record, in the field ev_connector_types</p>
<p>[u'ACME', u'QUICK_CONNECT'] I'd like the value for that record to show: ACME, QUICK_CONNECT, without the list brackets, the 'u', and the quotes.</p>
<p>I've tried.</p>
<pre><code>df.loc[df["ev_connector_types"].isnull(), "ev_connector_types"] = ", ".join(df["ev_connector_types"])
df["ev_connector_types"] = df["ev_connector_types"].agg(' '.join)
df["ev_connector_types"] = df["ev_connector_types"].apply(''.join)
df["ev_connector_types"] = df["ev_connector_types"].str.join(" ")
</code></pre>
<p>But I'm not getting anywhere.</p>
<p>Basically do this:</p>
<pre><code>myList = [u'ACME', u'QUICK_CONNECT']
x = " ".join(myList)
</code></pre>
<p>Within Pandas.</p>
<p>After much trial and error, I found my field value was just a bunch of string characters like "[u'ACME', u'QUICK_CONNECT']" and the join was doing this: [ u' A C M E ' , u ' Q U I C K _ C O N N E C T ' ] If I split it on the comma, I got a list and the answer below worked. However I ended up doing a lsplit and rsplit and a replace to get what I wanted.</p>
<p>This line was the problem, it took the field type from a list to a string:
df["ev_connector_types"] = df["ev_connector_types"].astype('str')</p>
| 66,012,201
| 2021-02-01T22:33:07.653000
| 1
| null | -2
| 371
|
python|pandas
|
<p>It's as simple as you have stated. Just do it within <code>apply()</code>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([{"ev_connector_types":[u'ACME', u'QUICK_CONNECT']}])
df.assign(ev_connector_types=df["ev_connector_types"].apply(lambda l: " ".join(l)))
</code></pre>
| 2021-02-02T14:56:46.560000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.str.join.html
|
pandas.Series.str.join#
pandas.Series.str.join#
Series.str.join(sep)[source]#
Join lists contained as elements in the Series/Index with passed delimiter.
If the elements of a Series are lists themselves, join the content of these
lists using the delimiter passed to the function.
This function is an equivalent to str.join().
Parameters
sepstrDelimiter to use between list entries.
Returns
Series/Index: objectThe list entries concatenated by intervening occurrences of the
delimiter.
Raises
AttributeErrorIf the supplied Series contains neither strings nor lists.
See also
str.joinStandard library version of this method.
Series.str.splitSplit strings around given separator/delimiter.
Notes
If any of the list items is not a string object, the result of the join
It's as simple as you have stated. Just do it within apply():
import pandas as pd
df = pd.DataFrame([{"ev_connector_types":[u'ACME', u'QUICK_CONNECT']}])
df.assign(ev_connector_types=df["ev_connector_types"].apply(lambda l: " ".join(l)))
will be NaN.
Examples
Example with a list that contains non-string elements.
>>> s = pd.Series([['lion', 'elephant', 'zebra'],
... [1.1, 2.2, 3.3],
... ['cat', np.nan, 'dog'],
... ['cow', 4.5, 'goat'],
... ['duck', ['swan', 'fish'], 'guppy']])
>>> s
0 [lion, elephant, zebra]
1 [1.1, 2.2, 3.3]
2 [cat, nan, dog]
3 [cow, 4.5, goat]
4 [duck, [swan, fish], guppy]
dtype: object
Join all lists using a ‘-’. The lists containing object(s) of types other
than str will produce a NaN.
>>> s.str.join('-')
0 lion-elephant-zebra
1 NaN
2 NaN
3 NaN
4 NaN
dtype: object
| 789
| 1,025
|
Pandas Column join list values
Using Python, Pandas, I have a couple columns that have lists in them, I want to convert the list to a string.
Example: I have the following values in a record, in the field ev_connector_types
[u'ACME', u'QUICK_CONNECT'] I'd like the value for that record to show: ACME, QUICK_CONNECT, without the list brackets, the 'u', and the quotes.
I've tried.
df.loc[df["ev_connector_types"].isnull(), "ev_connector_types"] = ", ".join(df["ev_connector_types"])
df["ev_connector_types"] = df["ev_connector_types"].agg(' '.join)
df["ev_connector_types"] = df["ev_connector_types"].apply(''.join)
df["ev_connector_types"] = df["ev_connector_types"].str.join(" ")
But I'm not getting anywhere.
Basically do this:
myList = [u'ACME', u'QUICK_CONNECT']
x = " ".join(myList)
Within Pandas.
After much trial and error, I found my field value was just a bunch of string characters like "[u'ACME', u'QUICK_CONNECT']" and the join was doing this: [ u' A C M E ' , u ' Q U I C K _ C O N N E C T ' ] If I split it on the comma, I got a list and the answer below worked. However I ended up doing a lsplit and rsplit and a replace to get what I wanted.
This line was the problem, it took the field type from a list to a string:
df["ev_connector_types"] = df["ev_connector_types"].astype('str')
|
69,217,329
|
How am I able to 'update' values from one column to another in the same dataframe?
|
<p>I have this dataframe that houses job details and there are columns for Pay, location, Job position, Job Description, other information, Etc.</p>
<p>However sometimes the values in pay might not be present and is available in location for example.</p>
<p>I want to be able to loop through the column in location and update the values in pay but if there is data in pay I want to retain it, only updating the values from location instead.</p>
<p>My first thought is using apply lambda like so but I am unable to put in the else condition that would just allow me to retain the value of pay if location is blank</p>
<pre><code>df['Pay'] = df['Location'].apply(lambda x : x if x !='' else [retain the value of Pay] )
</code></pre>
<p>What am I doing wrong?</p>
<p>Thank you for reading</p>
| 69,217,687
| 2021-09-17T03:13:25.940000
| 2
| null | 0
| 122
|
python|pandas
|
<h2><strong><code>replace and fillna</code> Method</strong></h2>
<pre><code>data = {
'pay': [2, 4, 6, 8],
'location': ['', 'MA', 'CA', ''],
}
df = pd.DataFrame(data)
df['pay'] = df.location.replace({'': None}).fillna(df.pay)
</code></pre>
<h2><strong><code>np.where</code></strong> Method</h2>
<pre><code>df['pay'] = np.where(df.location.ne(''), df.location, df.pay)
</code></pre>
<p>Output:</p>
<pre><code>print(df['pay'])
0 2
1 MA
2 CA
3 8
Name: pay, dtype: object
</code></pre>
| 2021-09-17T04:12:11.580000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.update.html
|
pandas.DataFrame.update#
pandas.DataFrame.update#
DataFrame.update(other, join='left', overwrite=True, filter_func=None, errors='ignore')[source]#
Modify in place using non-NA values from another DataFrame.
Aligns on indices. There is no return value.
Parameters
otherDataFrame, or object coercible into a DataFrameShould have at least one matching index/column label
with the original DataFrame. If a Series is passed,
its name attribute must be set, and that will be
used as the column name to align with the original DataFrame.
join{‘left’}, default ‘left’Only left join is implemented, keeping the index and columns of the
original object.
overwritebool, default TrueHow to handle non-NA values for overlapping keys:
replace and fillna Method
data = {
'pay': [2, 4, 6, 8],
'location': ['', 'MA', 'CA', ''],
}
df = pd.DataFrame(data)
df['pay'] = df.location.replace({'': None}).fillna(df.pay)
np.where Method
df['pay'] = np.where(df.location.ne(''), df.location, df.pay)
Output:
print(df['pay'])
0 2
1 MA
2 CA
3 8
Name: pay, dtype: object
True: overwrite original DataFrame’s values
with values from other.
False: only update values that are NA in
the original DataFrame.
filter_funccallable(1d-array) -> bool 1d-array, optionalCan choose to replace values other than NA. Return True for values
that should be updated.
errors{‘raise’, ‘ignore’}, default ‘ignore’If ‘raise’, will raise a ValueError if the DataFrame and other
both contain non-NA data in the same place.
Returns
Nonemethod directly changes calling object
Raises
ValueError
When errors=’raise’ and there’s overlapping non-NA data.
When errors is not either ‘ignore’ or ‘raise’
NotImplementedError
If join != ‘left’
See also
dict.updateSimilar method for dictionaries.
DataFrame.mergeFor column(s)-on-column(s) operations.
Examples
>>> df = pd.DataFrame({'A': [1, 2, 3],
... 'B': [400, 500, 600]})
>>> new_df = pd.DataFrame({'B': [4, 5, 6],
... 'C': [7, 8, 9]})
>>> df.update(new_df)
>>> df
A B
0 1 4
1 2 5
2 3 6
The DataFrame’s length does not increase as a result of the update,
only values at matching index/column labels are updated.
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
... 'B': ['x', 'y', 'z']})
>>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']})
>>> df.update(new_df)
>>> df
A B
0 a d
1 b e
2 c f
For Series, its name attribute must be set.
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
... 'B': ['x', 'y', 'z']})
>>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2])
>>> df.update(new_column)
>>> df
A B
0 a d
1 b y
2 c e
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
... 'B': ['x', 'y', 'z']})
>>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2])
>>> df.update(new_df)
>>> df
A B
0 a x
1 b d
2 c e
If other contains NaNs the corresponding values are not updated
in the original dataframe.
>>> df = pd.DataFrame({'A': [1, 2, 3],
... 'B': [400, 500, 600]})
>>> new_df = pd.DataFrame({'B': [4, np.nan, 6]})
>>> df.update(new_df)
>>> df
A B
0 1 4.0
1 2 500.0
2 3 6.0
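Two keywords described above but not shown in these examples, sketched on small hypothetical frames (an illustration, not part of the original docs):
import numpy as np
import pandas as pd

# overwrite=False: only NA cells in the original frame are filled from `other`
df_u = pd.DataFrame({"A": [1.0, np.nan, 3.0]})
df_u.update(pd.DataFrame({"A": [10.0, 20.0, 30.0]}), overwrite=False)
# df_u["A"] is now [1.0, 20.0, 3.0]

# filter_func: update only where the callable returns True for the original values
df_f = pd.DataFrame({"A": [0, 5, 0]})
df_f.update(pd.DataFrame({"A": [7, 8, 9]}), filter_func=lambda arr: arr == 0)
# df_f["A"] is now [7, 5, 9]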
| 729
| 1,075
|
How am I able to 'update' values from one column to another in the same dataframe?
I have this dataframe that houses job details and there are columns for Pay, location, Job position, Job Description, other information, Etc.
However, sometimes the value in pay might not be present and is instead available in location, for example.
I want to be able to loop through the column in location and update the values in pay but if there is data in pay I want to retain it, only updating the values from location instead.
My first thought is using apply lambda like so but I am unable to put in the else condition that would just allow me to retain the value of pay if location is blank
df['Pay'] = df['Location'].apply(lambda x : x if x !='' else [retain the value of Pay] )
What am I doing wrong?
Thank you for reading
|
67,188,834
|
faster to check if an element in a pandas series exist in a list of list
|
<p>What would be the easiest and fastest way of checking if an element in a <strong>series</strong> exists in a <strong>list of lists</strong>? For example, I have a series and a list of lists as follows. I have a loop that does exactly this but it is a bit slow, so I want a faster way to do this.</p>
<pre><code>groups = []
for desc in descs:
for i in range(len(list_of_list)):
if desc in list_of_list[i]:
groups.append(i)
list_of_list = [['rfnd sms chrgs'],
['loan payment receipt'],
['zen june2018 aksg sal 1231552',
'zen july2018 aksg sal 1411191',
'zen aug2018 aksg mda sal 16014'],
['cshw agnes john udo mrs ',
'cshw agnes john udo',
'cshw agnes udo',
'cshw agnes john'],
['sms alert charge outstanding'],
['maint fee recovery jul 2018', 'vat maint fee recovery jul 2018'],
['sept2018 aksg mda sal 20028',
'oct2018 aksg mda sal 21929',
'nov2018 aksg mda sal 25229'],
['sms alert charges 28th sep 26th oct 2018']]
descs =
1959 rfnd sms chrgs
1960 loan payment receipt
1961 zen june2018 aksg sal 1231552
1962 loan payment receipt
1963 cshw agnes john udo mrs
1964 maint fee frm 31 may 2018 28 jun 2018
1965 vat maint fee frm 31 may 2018 28 jun 2018
1966 sms alert charge outstanding
1967 loan payment receipt
1968 zen july2018 aksg sal 1411191
1969 loan payment receipt
</code></pre>
<p>Expected output is like a list of numbers</p>
<pre><code>e.g [1,2,3,4,5,6]
</code></pre>
| 67,189,612
| 2021-04-21T03:10:54.180000
| 1
| null | 1
| 125
|
python|pandas
|
<p>Prepare your data:</p>
<pre><code># merge a series without a name is not allowed
descs = descs.rename("descs")
# convert list of lists to a series
ll = pd.Series(list_of_list).explode().reset_index()
ll.columns = ["pos", "descs"]
</code></pre>
<pre><code>>>> descs
1959 rfnd sms chrgs
1960 loan payment receipt
1961 zen june2018 aksg sal 1231552
1962 loan payment receipt
1963 cshw agnes john udo mrs
1964 maint fee frm 31 may 2018 28 jun 2018
1965 maint fee frm 31 may 2018 28 jun 2018
1966 sms alert charge outstanding
1967 loan payment receipt
1968 zen july2018 aksg sal 1411191
1969 loan payment receipt
Name: descs, dtype: object
>>> ll
pos descs
0 0 rfnd sms chrgs
1 1 loan payment receipt
2 2 zen june2018 aksg sal 1231552
3 2 zen july2018 aksg sal 1411191
4 2 zen aug2018 aksg mda sal 16014
5 3 cshw agnes john udo mrs
6 3 cshw agnes john udo
7 3 cshw agnes udo
8 3 cshw agnes john
9 4 sms alert charge outstanding
10 5 maint fee recovery jul 2018
11 5 vat maint fee recovery jul 2018
12 6 sept2018 aksg mda sal 20028
13 6 oct2018 aksg mda sal 21929
14 6 nov2018 aksg mda sal 25229
15 7 sms alert charges 28th sep 26th oct 2018
</code></pre>
<p>Now you can merge <code>descs</code> and <code>ll</code> to get your list of numbers:</p>
<pre><code>df = pd.merge(descs, ll, on="descs", how="left").set_index(descs.index)
</code></pre>
<pre><code>>>> df
descs pos
1959 rfnd sms chrgs 0.0
1960 loan payment receipt 1.0
1961 zen june2018 aksg sal 1231552 2.0
1962 loan payment receipt 1.0
1963 cshw agnes john udo mrs 3.0
1964 maint fee frm 31 may 2018 28 jun 2018 NaN
1965 maint fee frm 31 may 2018 28 jun 2018 NaN
1966 sms alert charge outstanding 4.0
1967 loan payment receipt 1.0
1968 zen july2018 aksg sal 1411191 2.0
1969 loan payment receipt 1.0
</code></pre>
<p>Check:</p>
<pre><code>>>> df.loc[1966, "descs"]
'sms alert charge outstanding'
>>> list_of_list[int(df.loc[1966, "pos"])]
['sms alert charge outstanding']
</code></pre>
<p><strong>Another method</strong>:</p>
<p>This method takes advantage of Categorical data type. It could be faster.</p>
<pre><code>>>> ll = pd.Series(list_of_list).explode()
>>> descs.astype("category").map(pd.Series(ll.index, index=ll.astype("category")))
1959 0.0
1960 1.0
1961 2.0
1962 1.0
1963 3.0
1964 NaN
1965 NaN
1966 4.0
1967 1.0
1968 2.0
1969 1.0
dtype: float64
</code></pre>
| 2021-04-21T05:09:38.223000
| 1
|
https://pandas.pydata.org/docs/user_guide/io.html
|
Prepare your data:
# merge a series without a name is not allowed
descs = descs.rename("descs")
# convert list of lists to a series
ll = pd.Series(list_of_list).explode().reset_index()
ll.columns = ["pos", "descs"]
>>> descs
1959 rfnd sms chrgs
1960 loan payment receipt
1961 zen june2018 aksg sal 1231552
1962 loan payment receipt
1963 cshw agnes john udo mrs
1964 maint fee frm 31 may 2018 28 jun 2018
1965 maint fee frm 31 may 2018 28 jun 2018
1966 sms alert charge outstanding
1967 loan payment receipt
1968 zen july2018 aksg sal 1411191
1969 loan payment receipt
Name: descs, dtype: object
>>> ll
pos descs
0 0 rfnd sms chrgs
1 1 loan payment receipt
2 2 zen june2018 aksg sal 1231552
3 2 zen july2018 aksg sal 1411191
4 2 zen aug2018 aksg mda sal 16014
5 3 cshw agnes john udo mrs
6 3 cshw agnes john udo
7 3 cshw agnes udo
8 3 cshw agnes john
9 4 sms alert charge outstanding
10 5 maint fee recovery jul 2018
11 5 vat maint fee recovery jul 2018
12 6 sept2018 aksg mda sal 20028
13 6 oct2018 aksg mda sal 21929
14 6 nov2018 aksg mda sal 25229
15 7 sms alert charges 28th sep 26th oct 2018
Now you can merge descs and ll to get your list of numbers:
df = pd.merge(descs, ll, on="descs", how="left").set_index(descs.index)
>>> df
descs pos
1959 rfnd sms chrgs 0.0
1960 loan payment receipt 1.0
1961 zen june2018 aksg sal 1231552 2.0
1962 loan payment receipt 1.0
1963 cshw agnes john udo mrs 3.0
1964 maint fee frm 31 may 2018 28 jun 2018 NaN
1965 maint fee frm 31 may 2018 28 jun 2018 NaN
1966 sms alert charge outstanding 4.0
1967 loan payment receipt 1.0
1968 zen july2018 aksg sal 1411191 2.0
1969 loan payment receipt 1.0
Check:
>>> df.loc[1966, "descs"]
'sms alert charge outstanding'
>>> list_of_list[int(df.loc[1966, "pos"])]
['sms alert charge outstanding']
Another method:
This method takes advantage of Categorical data type. It could be faster.
>>> ll = pd.Series(list_of_list).explode()
>>> descs.astype("category").map(pd.Series(ll.index, index=ll.astype("category")))
1959 0.0
1960 1.0
1961 2.0
1962 1.0
1963 3.0
1964 NaN
1965 NaN
1966 4.0
1967 1.0
1968 2.0
1969 1.0
dtype: float64
| 0
| 2,850
|
faster to check if an element in a pandas series exist in a list of list
What would be the easiest and fastest way of checking if an element in a series exists in a list of lists? For example, I have a series and a list of lists as follows. I have a loop that does exactly this but it is a bit slow, so I want a faster way to do this.
groups = []
for desc in descs:
for i in range(len(list_of_list)):
if desc in list_of_list[i]:
groups.append(i)
list_of_list = [['rfnd sms chrgs'],
['loan payment receipt'],
['zen june2018 aksg sal 1231552',
'zen july2018 aksg sal 1411191',
'zen aug2018 aksg mda sal 16014'],
['cshw agnes john udo mrs ',
'cshw agnes john udo',
'cshw agnes udo',
'cshw agnes john'],
['sms alert charge outstanding'],
['maint fee recovery jul 2018', 'vat maint fee recovery jul 2018'],
['sept2018 aksg mda sal 20028',
'oct2018 aksg mda sal 21929',
'nov2018 aksg mda sal 25229'],
['sms alert charges 28th sep 26th oct 2018']]
descs =
1959 rfnd sms chrgs
1960 loan payment receipt
1961 zen june2018 aksg sal 1231552
1962 loan payment receipt
1963 cshw agnes john udo mrs
1964 maint fee frm 31 may 2018 28 jun 2018
1965 vat maint fee frm 31 may 2018 28 jun 2018
1966 sms alert charge outstanding
1967 loan payment receipt
1968 zen july2018 aksg sal 1411191
1969 loan payment receipt
Expected output is like a list of numbers
e.g [1,2,3,4,5,6]
|
62,032,101
|
replace column value if string does not start with "x"
|
<p>I have a df read in with pandas, and in a specific row, if the value does not start with a specific string (in this case a file path), I want to overwrite the contents with values from a different column, same row.</p>
<pre><code>for R in df:
if not R['O_path'].str.startswith('\\correct\\path\\here'):
R['O_path'] = R['N_path']
else:
continue
</code></pre>
<p>I keep getting "TypeError: string indices must be integers" Clearly I'm doing something wrong, any ideas?</p>
| 62,032,211
| 2020-05-26T21:59:40.190000
| 2
| null | 1
| 126
|
python|pandas
|
<p>I think you are looking for this:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[~df["O_path"].str.startswith('\\correct\\path\\here'), 'O_path'] = df['N_path']
</code></pre>
| 2020-05-26T22:08:11.293000
| 1
|
https://pandas.pydata.org/docs/user_guide/text.html
|
Working with text data#
Working with text data#
Text data types#
New in version 1.0.0.
There are two ways to store text data in pandas:
object -dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate
for many reasons:
You can accidentally store a mixture of strings and non-strings in an
object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes().
There isn’t a clear way to select just text while excluding non-text
but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear
than 'string'.
Currently, the performance of object dtype arrays of strings and
arrays.StringArray are about the same. We expect future enhancements
to significantly increase the performance and lower the memory overhead of
StringArray.
Warning
StringArray is currently considered experimental. The implementation
and parts of the API may change without warning.
I think you are looking for this:
df.loc[df["O_path"].str.startswith('\\correct\\path\\here'), 'O_path'] = df['N_path']
For backwards-compatibility, object dtype remains the default type we
infer a list of strings to
In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0 a
1 b
2 c
dtype: object
To explicitly request string dtype, specify the dtype
In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0 a
1 b
2 c
dtype: string
In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0 a
1 b
2 c
dtype: string
Or astype after the Series or DataFrame is created
In [4]: s = pd.Series(["a", "b", "c"])
In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object
In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string
Changed in version 1.1.0.
You can also use StringDtype/"string" as the dtype on non-string data and
it will be converted to string dtype:
In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")
In [8]: s
Out[8]:
0 a
1 2
2 <NA>
dtype: string
In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:
In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")
In [11]: s1
Out[11]:
0 1
1 2
2 <NA>
dtype: Int64
In [12]: s2 = s1.astype("string")
In [13]: s2
Out[13]:
0 1
1 2
2 <NA>
dtype: string
In [14]: type(s2[0])
Out[14]: str
Behavior differences#
These are places where the behavior of StringDtype objects differ from
object dtype
For StringDtype, string accessor methods
that return numeric output will always return a nullable integer dtype,
rather than either int or float dtype, depending on the presence of NA values.
Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")
In [16]: s
Out[16]:
0 a
1 <NA>
2 b
dtype: string
In [17]: s.str.count("a")
Out[17]:
0 1
1 <NA>
2 0
dtype: Int64
In [18]: s.dropna().str.count("a")
Out[18]:
0 1
2 0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype
In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")
In [20]: s2.str.count("a")
Out[20]:
0 1.0
1 NaN
2 0.0
dtype: float64
In [21]: s2.dropna().str.count("a")
Out[21]:
0 1
2 0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for
methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0 False
1 <NA>
2 False
dtype: boolean
In [23]: s.str.match("a")
Out[23]:
0 True
1 <NA>
2 False
dtype: boolean
Some string methods, like Series.str.decode() are not available
on StringArray because StringArray only holds strings, not
bytes.
In comparison operations, arrays.StringArray and Series backed
by a StringArray will return an object with BooleanDtype,
rather than a bool dtype object. Missing values in a StringArray
will propagate in comparison operations, rather than always comparing
unequal like numpy.nan.
Everything else that follows in the rest of this document applies equally to
string and object dtype.
String methods#
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
accessed via the str attribute and generally have names matching
the equivalent (scalar) built-in string methods:
In [24]: s = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
....: )
....:
In [25]: s.str.lower()
Out[25]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
In [26]: s.str.upper()
Out[26]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string
In [27]: s.str.len()
Out[27]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])
In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or
transforming DataFrame columns. For instance, you may have columns with
leading or trailing whitespace:
In [32]: df = pd.DataFrame(
....: np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
....: )
....:
In [33]: df
Out[33]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Since df.columns is an Index object, we can use the .str accessor
In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')
In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed.
Here we are removing leading and trailing whitespaces, lower casing all names,
and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
In [37]: df
Out[37]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Note
If you have a Series where lots of elements are repeated
(i.e. the number of unique elements in the Series is a lot smaller than the length of the
Series), it can be faster to convert the original Series to one of type
category and then use .str.<method> or .dt.<property> on that.
The performance difference comes from the fact that, for Series of type category, the
string operations are done on the .categories and not on each element of the
Series.
Please note that a Series of type category with string .categories has
some limitations in comparison to Series of type string (e.g. you can’t add strings to
each other: s + " " + s won’t work if s is a Series of type category). Also,
.str methods which operate on elements of type list are not available on such a
Series.
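A minimal sketch of the conversion this note describes (toy data; any speed-up depends on how repetitive the data is):
import pandas as pd

s = pd.Series(["apple", "banana", "apple", "banana"] * 1000)

# The string operation runs once per unique category instead of once per element
fast = s.astype("category").str.upper()
slow = s.str.upper()

assert fast.tolist() == slow.tolist()  # same values either way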
Warning
Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Splitting and replacing strings#
Methods like split return a Series of lists:
In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")
In [39]: s2.str.split("_")
Out[39]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [40]: s2.str.split("_").str.get(1)
Out[40]:
0 b
1 d
2 <NA>
3 g
dtype: object
In [41]: s2.str.split("_").str[1]
Out[41]:
0 b
1 d
2 <NA>
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand.
In [42]: s2.str.split("_", expand=True)
Out[42]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h
When original Series has StringDtype, the output columns will all
be StringDtype as well.
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h
rsplit is similar to split except it works in the reverse direction,
i.e., from the end of the string to the beginning of the string:
In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h
replace optionally uses regular expressions:
In [45]: s3 = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
....: dtype="string",
....: )
....:
In [46]: s3
Out[46]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string
In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Warning
Some caution must be taken when dealing with regular expressions! The current behavior
is to treat single character patterns as literal strings, even when regex is set
to True. This behavior is deprecated and will be removed in a future version so
that the regex keyword is always respected.
Changed in version 1.2.0.
If you want literal replacement of a string (equivalent to str.replace()), you
can set the optional regex parameter to False, rather than escaping each
character. In this case both pat and repl must be strings:
In [48]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")
# These lines are equivalent
In [49]: dollars.str.replace(r"-\$", "-", regex=True)
Out[49]:
0 12
1 -10
2 $10,000
dtype: string
In [50]: dollars.str.replace("-$", "-", regex=False)
Out[50]:
0 12
1 -10
2 $10,000
dtype: string
The replace method can also take a callable as replacement. It is called
on every pat using re.sub(). The callable should expect one
positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [51]: pat = r"[a-z]+"
In [52]: def repl(m):
....: return m.group(0)[::-1]
....:
In [53]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[53]:
0 oof 123
1 rab zab
2 <NA>
dtype: string
# Using regex groups
In [54]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
In [55]: def repl(m):
....: return m.group("two").swapcase()
....:
In [56]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[56]:
0 bAR
1 <NA>
dtype: string
The replace method also accepts a compiled regular expression object
from re.compile() as a pattern. All flags should be included in the
compiled regular expression object.
In [57]: import re
In [58]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)
In [59]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[59]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled
regular expression object will raise a ValueError.
In [60]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9
(https://docs.python.org/3/library/stdtypes.html#str.removeprefix):
New in version 1.4.0.
In [61]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])
In [62]: s.str.removeprefix("str_")
Out[62]:
0 foo
1 bar
2 no_prefix
dtype: object
In [63]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])
In [64]: s.str.removesuffix("_str")
Out[64]:
0 foo
1 bar
2 no_suffix
dtype: object
Concatenation#
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(),
resp. Index.str.cat.
Concatenating a single Series into a string#
The content of a Series (or Index) can be concatenated:
In [65]: s = pd.Series(["a", "b", "c", "d"], dtype="string")
In [66]: s.str.cat(sep=",")
Out[66]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [67]: s.str.cat()
Out[67]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
In [68]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")
In [69]: t.str.cat(sep=",")
Out[69]: 'a,b,d'
In [70]: t.str.cat(sep=",", na_rep="-")
Out[70]: 'a,b,-,d'
Concatenating a Series and something list-like into a Series#
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).
In [71]: s.str.cat(["A", "B", "C", "D"])
Out[71]:
0 aA
1 bB
2 cC
3 dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [72]: s.str.cat(t)
Out[72]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string
In [73]: s.str.cat(t, na_rep="-")
Out[73]:
0 aa
1 bb
2 c-
3 dd
dtype: string
Concatenating a Series and something array-like into a Series#
The parameter others can also be two-dimensional. In this case, the number or rows must match the lengths of the calling Series (or Index).
In [74]: d = pd.concat([t, s], axis=1)
In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string
In [76]: d
Out[76]:
0 1
0 a a
1 b b
2 <NA> c
3 d d
In [77]: s.str.cat(d, na_rep="-")
Out[77]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and an indexed object into a Series, with alignment#
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.
In [78]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")
In [79]: s
Out[79]:
0 a
1 b
2 c
3 d
dtype: string
In [80]: u
Out[80]:
1 b
3 d
0 a
2 c
dtype: string
In [81]: s.str.cat(u)
Out[81]:
0 aa
1 bb
2 cc
3 dd
dtype: string
In [82]: s.str.cat(u, join="left")
Out[82]:
0 aa
1 bb
2 cc
3 dd
dtype: string
Warning
If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right').
In particular, alignment also means that the different lengths do not need to coincide anymore.
In [83]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")
In [84]: s
Out[84]:
0 a
1 b
2 c
3 d
dtype: string
In [85]: v
Out[85]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [86]: s.str.cat(v, join="left", na_rep="-")
Out[86]:
0 aa
1 bb
2 c-
3 dd
dtype: string
In [87]: s.str.cat(v, join="outer", na_rep="-")
Out[87]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string
The same alignment can be used when others is a DataFrame:
In [88]: f = d.loc[[3, 2, 1, 0], :]
In [89]: s
Out[89]:
0 a
1 b
2 c
3 d
dtype: string
In [90]: f
Out[90]:
0 1
3 d d
2 <NA> c
1 b b
0 a a
In [91]: s.str.cat(f, join="left", na_rep="-")
Out[91]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and many objects into a Series#
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray)
can be combined in a list-like container (including iterators, dict-views, etc.).
In [92]: s
Out[92]:
0 a
1 b
2 c
3 d
dtype: string
In [93]: u
Out[93]:
1 b
3 d
0 a
2 c
dtype: string
In [94]: s.str.cat([u, u.to_numpy()], join="left")
Out[94]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index),
but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):
In [95]: v
Out[95]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [96]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[96]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes,
the union of these indexes will be used as the basis for the final concatenation:
In [97]: u.loc[[3]]
Out[97]:
3 d
dtype: string
In [98]: v.loc[[-1, 0]]
Out[98]:
-1 z
0 a
dtype: string
In [99]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[99]:
3 dd-
-1 --z
0 a-a
dtype: string
Indexing with .str#
You can use [] notation to directly index by position locations. If you index past the end
of the string, the result will be a NaN.
In [100]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [101]: s.str[0]
Out[101]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string
In [102]: s.str[1]
Out[102]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string
Extracting substrings#
Extract first match in each subject (extract)#
Warning
Before version 0.23, argument expand of the extract method defaulted to
False. When expand=False, expand returns a Series, Index, or
DataFrame, depending on the subject and regular expression
pattern. When expand=True, it always returns a DataFrame,
which is more consistent and less confusing from the perspective of a user.
expand=True has been the default since version 0.23.0.
The extract method accepts a regular expression with at least one
capture group.
Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [103]: pd.Series(
.....: ["a1", "b2", "c3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])(\d)", expand=False)
.....:
Out[103]:
0 1
0 a 1
1 b 2
2 <NA> <NA>
Elements that do not match return a row filled with NaN. Thus, a
Series of messy strings can be “converted” into a like-indexed Series
or DataFrame of cleaned-up or more useful strings, without
necessitating get() to access tuples or re.match objects. The
dtype of the result is always object, even if no match is found and
the result only contains NaN.
Named groups like
In [104]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(
.....: r"(?P<letter>[ab])(?P<digit>\d)", expand=False
.....: )
.....:
Out[104]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>
and optional groups like
In [105]: pd.Series(
.....: ["a1", "b2", "3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])?(\d)", expand=False)
.....:
Out[105]:
0 1
0 a 1
1 b 2
2 <NA> 3
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
numbers will be used.
Extracting a regular expression with one group returns a DataFrame
with one column if expand=True.
In [106]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=True)
Out[106]:
0
0 1
1 2
2 <NA>
It returns a Series if expand=False.
In [107]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=False)
Out[107]:
0 1
1 2
2 <NA>
dtype: string
Calling on an Index with a regex with exactly one capture group
returns a DataFrame with one column if expand=True.
In [108]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"], dtype="string")
In [109]: s
Out[109]:
A11 a1
B22 b2
C33 c3
dtype: string
In [110]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
Out[110]:
letter
0 A
1 B
2 C
It returns an Index if expand=False.
In [111]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[111]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group
returns a DataFrame if expand=True.
In [112]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[112]:
letter 1
0 A 11
1 B 22
2 C 33
It raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
The table below summarizes the behavior of extract(expand=False)
(input subject in first column, number of groups in regex in
first row)
            1 group     >1 group
Index       Index       ValueError
Series      Series      DataFrame
Extract all matches in each subject (extractall)#
Unlike extract (which returns only the first match),
In [113]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"], dtype="string")
In [114]: s
Out[114]:
A a1a2
B b1
C c1
dtype: string
In [115]: two_groups = "(?P<letter>[a-z])(?P<digit>[0-9])"
In [116]: s.str.extract(two_groups, expand=True)
Out[116]:
letter digit
A a 1
B b 1
C c 1
the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its
rows. The last level of the MultiIndex is named match and
indicates the order in the subject.
In [117]: s.str.extractall(two_groups)
Out[117]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [118]: s = pd.Series(["a3", "b3", "c2"], dtype="string")
In [119]: s
Out[119]:
0 a3
1 b3
2 c2
dtype: string
then extractall(pat).xs(0, level='match') gives the same result as
extract(pat).
In [120]: extract_result = s.str.extract(two_groups, expand=True)
In [121]: extract_result
Out[121]:
letter digit
0 a 3
1 b 3
2 c 2
In [122]: extractall_result = s.str.extractall(two_groups)
In [123]: extractall_result
Out[123]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [124]: extractall_result.xs(0, level="match")
Out[124]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the
same result as a Series.str.extractall with a default index (starts from 0).
In [125]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[125]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [126]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Out[126]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
Testing for strings that match or contain a pattern#
You can check whether elements contain a pattern:
In [127]: pattern = r"[0-9][a-z]"
In [128]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.contains(pattern)
.....:
Out[128]:
0 False
1 False
2 True
3 True
4 True
5 True
dtype: boolean
Or whether elements match a pattern:
In [129]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.match(pattern)
.....:
Out[129]:
0 False
1 False
2 True
3 True
4 False
5 True
dtype: boolean
New in version 1.1.0.
In [130]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.fullmatch(pattern)
.....:
Out[130]:
0 False
1 False
2 True
3 True
4 False
5 False
dtype: boolean
Note
The distinction between match, fullmatch, and contains is strictness:
fullmatch tests whether the entire string matches the regular expression;
match tests whether there is a match of the regular expression that begins
at the first character of the string; and contains tests whether there is
a match of the regular expression at any position within the string.
The corresponding functions in the re package for these three match modes are
re.fullmatch,
re.match, and
re.search,
respectively.
Methods like match, fullmatch, contains, startswith, and
endswith take an extra na argument so missing values can be considered
True or False:
In [131]: s4 = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [132]: s4.str.contains("A", na=False)
Out[132]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
Creating indicator variables#
You can extract dummy variables from string columns.
For example if they are separated by a '|':
In [133]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")
In [134]: s.str.get_dummies(sep="|")
Out[134]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
String Index also supports get_dummies which returns a MultiIndex.
In [135]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])
In [136]: idx.str.get_dummies(sep="|")
Out[136]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])
See also get_dummies().
Method summary#
Method
Description
cat()
Concatenate strings
split()
Split strings on delimiter
rsplit()
Split strings on delimiter working from the end of the string
get()
Index into each element (retrieve i-th element)
join()
Join strings in each element of the Series with passed separator
get_dummies()
Split strings on the delimiter returning DataFrame of dummy variables
contains()
Return boolean array if each string contains pattern/regex
replace()
Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix()
Remove prefix from string, i.e. only remove if string starts with prefix.
removesuffix()
Remove suffix from string, i.e. only remove if string ends with suffix.
repeat()
Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad()
Add whitespace to left, right, or both sides of strings
center()
Equivalent to str.center
ljust()
Equivalent to str.ljust
rjust()
Equivalent to str.rjust
zfill()
Equivalent to str.zfill
wrap()
Split long strings into lines with length less than a given width
slice()
Slice each string in the Series
slice_replace()
Replace slice in each string with passed value
count()
Count occurrences of pattern
startswith()
Equivalent to str.startswith(pat) for each element
endswith()
Equivalent to str.endswith(pat) for each element
findall()
Compute list of all occurrences of pattern/regex for each string
match()
Call re.match on each element, returning matched groups as list
extract()
Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall()
Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len()
Compute string lengths
strip()
Equivalent to str.strip
rstrip()
Equivalent to str.rstrip
lstrip()
Equivalent to str.lstrip
partition()
Equivalent to str.partition
rpartition()
Equivalent to str.rpartition
lower()
Equivalent to str.lower
casefold()
Equivalent to str.casefold
upper()
Equivalent to str.upper
find()
Equivalent to str.find
rfind()
Equivalent to str.rfind
index()
Equivalent to str.index
rindex()
Equivalent to str.rindex
capitalize()
Equivalent to str.capitalize
swapcase()
Equivalent to str.swapcase
normalize()
Return Unicode normal form. Equivalent to unicodedata.normalize
translate()
Equivalent to str.translate
isalnum()
Equivalent to str.isalnum
isalpha()
Equivalent to str.isalpha
isdigit()
Equivalent to str.isdigit
isspace()
Equivalent to str.isspace
islower()
Equivalent to str.islower
isupper()
Equivalent to str.isupper
istitle()
Equivalent to str.istitle
isnumeric()
Equivalent to str.isnumeric
isdecimal()
Equivalent to str.isdecimal
| 1,088
| 1,208
|
replace column value if string does not start with "x"
I have a df read in with pandas, and in a specific column, if the value does not start with a specific string (in this case a file path), I want to overwrite the contents with the value from a different column in the same row.
for R in df:
if not R['O_path'].str.startswith('\\correct\\path\\here'):
R['O_path'] = R['N_path']
else:
continue
I keep getting "TypeError: string indices must be integers". Clearly I'm doing something wrong; any ideas?
|
65,081,065
|
Replacing values in multiple columns with Pandas based on conditions
|
<p>I have a very large dataframe where I only want to change the values in a small continuous subset of columns. Basically, in those columns the values are either integers or null. All I want is to replace the 0's and nulls with 'No' and everything else with 'Yes' only in those columns</p>
<p>In R, this can be done basically with a one liner:</p>
<pre><code>df <- df %>%
mutate_at(vars(MCI:BNP), ~factor(case_when(. > 0 ~ 'Yes',
TRUE ~ 'No')))
</code></pre>
<p>But we're working in Python and I can't quite figure out the equivalent using Pandas. I've been messing around with loc and iloc, which work fine when only changing a single column, but I must be missing something when it comes to modifying multiple columns. The answers I've found in other Stack Overflow posts have mostly been about changing the value in a single column based on some set of conditions.</p>
<pre><code>col1 = df.columns.get_loc("MCI")
col2 = df.columns.get_loc("BNP")
df.iloc[:,col1:col2]
</code></pre>
<p>Will get me the columns I want, but trying to call loc doesn't work with multidimensional keys. I even tried it with the columns as a list instead of by integer index, by creating an extra variable:</p>
<pre><code>binary_var = ['MCI','PVD','CVA','DEMENTIA','CPD','RD','PUD','MLD','DWOC','DWC','HoP','RND','MALIGNANCY','SLD','MST','HIV','AKF',
'ARMD','ASPHY','DEP','DWLK','DRUGA','DUOULC','FALL','FECAL','FLDELEX','FRAIL','GASTRICULC','GASTROULC','GLAU','HYPERKAL',
'HYPTEN','HYPOKAL','HYPOTHYR','HYPOXE','IMMUNOS','ISCHRT','LIPIDMETA','LOSWIGT','LOWBAK','MALNUT','OSTEO','PARKIN',
'PNEUM','RF','SEIZ','SD','TUML','UI','VI','MENTAL','FUROSEMIDE','METOPROLOL','ASPIRIN','OMEPRAZOLE','LISINOPRIL','DIGOXIN',
'ALDOSTERONE_ANTAGONIST','ACE_INHIBITOR','ANGIOTENSIN_RECEPTOR_BLOCKERS','BETA_BLOCKERSDIURETICHoP','BUN','CREATININE',
'SODIUM','POTASSIUM','HEMOGLOBIN','WBC_COUNT','CHLORIDE','ALBUMIN','TROPONIN','BNP']
df.loc[df[binary_var] == 0, binary_var]
</code></pre>
<p>But then it just can't find the index for those column names at all. I think Pandas also has problems converting columns that were originally integers into No/Yes. I don't need to do this in place; hopefully I'm just missing something simple that pandas has built in.</p>
<p>In a very pseudo-code description, all I really want is this:</p>
<pre><code>if(df.iloc[:,col1:col2] == 0 || df.iloc[:,col1:col2].isnull())
df ONLY in that subset of column = 'No'
else
df ONLY in that subset of column = 'Yes'
</code></pre>
| 65,081,187
| 2020-11-30T20:55:56.017000
| 1
| null | 1
| 383
|
python|pandas
|
<p>Use:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[:, 'MCI':'BNP'] = np.where(df.loc[:, 'MCI':'BNP'] > 0, 'Yes', 'No')
</code></pre>
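<p>A small runnable sketch of the same idea on made-up columns. Note that <code>NaN > 0</code> evaluates to <code>False</code>, so nulls end up as <code>'No'</code> as well, which matches the requirement:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

# Toy frame; only 'MCI' through 'BNP' should be converted
df = pd.DataFrame({'ID': [1, 2, 3],
                   'MCI': [0, 2, np.nan],
                   'PVD': [np.nan, 0, 5],
                   'BNP': [1, np.nan, 0]})

df.loc[:, 'MCI':'BNP'] = np.where(df.loc[:, 'MCI':'BNP'] > 0, 'Yes', 'No')
print(df)
#    ID  MCI  PVD  BNP
# 0   1   No   No  Yes
# 1   2  Yes   No   No
# 2   3   No  Yes   No
</code></pre>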
| 2020-11-30T21:06:08.850000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.replace.html
|
pandas.DataFrame.replace#
pandas.DataFrame.replace#
DataFrame.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the DataFrame are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replacestr, regex, list, dict, Series, int, float, or NoneHow to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
Use:
df.loc[:, 'MCI':'BNP'] = np.where(df.loc[:, 'MCI':'BNP'] > 0, 'Yes', 'No')
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
valuescalar, dict, list, str, regex, default NoneValue to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
limitint, default NoneMaximum size gap to forward or backward fill.
regexbool or same types as to_replace, default FalseWhether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method{‘pad’, ‘ffill’, ‘bfill’}The method to use when for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
DataFrameObject after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
DataFrame.fillnaFill NA values.
DataFrame.whereReplace values based on boolean condition.
Series.str.replaceSimple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When dict is used as the to_replace value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, it is like the
value(s) in the dict are equal to the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| 773
| 853
|
Replacing values in multiple columns with Pandas based on conditions
I have a very large dataframe where I only want to change the values in a small continuous subset of columns. Basically, in those columns the values are either integers or null. All I want is to replace the 0's and nulls with 'No' and everything else with 'Yes' only in those columns
In R, this can be done basically with a one liner:
df <- df %>%
mutate_at(vars(MCI:BNP), ~factor(case_when(. > 0 ~ 'Yes',
TRUE ~ 'No')))
But we're working in Python and I can't quite figure out the equivalent using Pandas. I've been messing around with loc and iloc, which work fine when only changing a single column, but I must be missing something when it comes to modifying multiple columns. The answers I've found in other Stack Overflow posts have mostly been about changing the value in a single column based on some set of conditions.
col1 = df.columns.get_loc("MCI")
col2 = df.columns.get_loc("BNP")
df.iloc[:,col1:col2]
Will get me the columns I want, but trying to call loc doesn't work with multidimensional keys. I even tried it with the columns as a list instead of by integer index, by creating an extra variable:
binary_var = ['MCI','PVD','CVA','DEMENTIA','CPD','RD','PUD','MLD','DWOC','DWC','HoP','RND','MALIGNANCY','SLD','MST','HIV','AKF',
'ARMD','ASPHY','DEP','DWLK','DRUGA','DUOULC','FALL','FECAL','FLDELEX','FRAIL','GASTRICULC','GASTROULC','GLAU','HYPERKAL',
'HYPTEN','HYPOKAL','HYPOTHYR','HYPOXE','IMMUNOS','ISCHRT','LIPIDMETA','LOSWIGT','LOWBAK','MALNUT','OSTEO','PARKIN',
'PNEUM','RF','SEIZ','SD','TUML','UI','VI','MENTAL','FUROSEMIDE','METOPROLOL','ASPIRIN','OMEPRAZOLE','LISINOPRIL','DIGOXIN',
'ALDOSTERONE_ANTAGONIST','ACE_INHIBITOR','ANGIOTENSIN_RECEPTOR_BLOCKERS','BETA_BLOCKERSDIURETICHoP','BUN','CREATININE',
'SODIUM','POTASSIUM','HEMOGLOBIN','WBC_COUNT','CHLORIDE','ALBUMIN','TROPONIN','BNP']
df.loc[df[binary_var] == 0, binary_var]
But then it just can't find the index for those column names at all. I think Pandas also has problems converting columns that were originally integers into No/Yes. I don't need to do this in place; hopefully I'm just missing something simple that pandas has built in.
In a very pseudo-code description, all I really want is this:
if(df.iloc[:,col1:col2] == 0 || df.iloc[:,col1:col2].isnull())
df ONLY in that subset of column = 'No'
else
df ONLY in that subset of column = 'Yes'
|
67,330,740
|
Pandas datatype inference
|
<p>I'm having trouble with datatype inference, so I decided to map datatypes manually; however, I have discovered that I can't look up pandas datatypes as expected.</p>
<pre><code>import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd
pd_to_pa_dtypes = {"object": pa.string(), "string": pa.string(), "double": pa.float64(),
pd.Int32Dtype(): pa.int32(), "int64": pa.int64(),
pd.Int64Dtype(): pa.int64(), "datetime64[ns]": pa.timestamp("ns", tz="UTC"),
pd.StringDtype(): pa.string(), '<M8[ns]': pa.timestamp("ns", tz="UTC")}
date = pd.to_datetime(["30/04/2021", "28/04/2021"], format="%d/%m/%Y")
df = pd.DataFrame(date)
print(df[0].dtype) # which print datetime64[ns]
pd_to_pa_dtypes[df[0].dtype] # KeyError: dtype('<M8[ns]')
# However I inserted in my dictionary both "datetime64[ns]" and "<M8[ns]", also if i check datatypes
df[0].dtype == "datetime64[ns]" # True
df[0].dtype == "<M8[ns]" # True
</code></pre>
<p>This also happens for some other datatypes; for example, I had problems with int64, while others are mapped as expected.</p>
| 67,334,284
| 2021-04-30T08:36:42.950000
| 2
| null | 1
| 128
|
python|pandas
|
<p>The problem is:</p>
<pre><code>type(df[0].dtype) # is numpy.dtype and not str
</code></pre>
<p>Therefore, when you try to access the value in the <code>pd_to_pa_dtypes</code> dictionary you get an error, because <code>df[0].dtype</code> is not the same as the <code>str</code> key you have in <code>pd_to_pa_dtypes</code>.</p>
<p>Now, to your next potential question: you get <code>True</code> when you run the following, due to the implementation of <code>__eq__</code> in the <code>dtype</code> class.</p>
<pre><code>df[0].dtype == "datetime64[ns]" # True
df[0].dtype == "<M8[ns]" # True
</code></pre>
<p>To conclude, use the <code>str</code> representation of your <code>dtype</code> object as follows:</p>
<pre><code>pd_to_pa_dtypes[df[0].dtype.str]
</code></pre>
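<p>As an alternative to <code>.str</code> (not part of the original answer), <code>str(dtype)</code> yields spellings like <code>'datetime64[ns]'</code> and <code>'int64'</code>, which match the string keys already present in the question's dictionary. A minimal sketch:</p>
<pre><code>import pandas as pd
import pyarrow as pa

pd_to_pa_dtypes = {"int64": pa.int64(),
                   "datetime64[ns]": pa.timestamp("ns", tz="UTC")}

date = pd.to_datetime(["30/04/2021", "28/04/2021"], format="%d/%m/%Y")
df = pd.DataFrame(date)

print(df[0].dtype.str)                    # '<M8[ns]'  (the key used by the .str approach)
print(pd_to_pa_dtypes[str(df[0].dtype)])  # pyarrow timestamp type via the plain string key
</code></pre>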
| 2021-04-30T12:53:06.857000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.infer_objects.html
|
pandas.DataFrame.infer_objects#
pandas.DataFrame.infer_objects#
DataFrame.infer_objects()[source]#
Attempt to infer better dtypes for object columns.
The problem is:
type(df[0].dtype) # is numpy.dtype and not str
Therefore, when you try to access the value in the pd_to_pa_dtypes dictionary you get an error, because df[0].dtype is not the same as the str key you have in pd_to_pa_dtypes.
Now, to your next potential question: you get True when you run the following, due to the implementation of __eq__ in the dtype class.
df[0].dtype == "datetime64[ns]" # True
df[0].dtype == "<M8[ns]" # True
To conclude, use the str representation of your dtype object as follows:
pd_to_pa_dtypes[df[0].dtype.str]
Attempts soft conversion of object-dtyped
columns, leaving non-object and unconvertible
columns unchanged. The inference rules are the
same as during normal Series/DataFrame construction.
Returns
convertedsame type as input object
See also
to_datetimeConvert argument to datetime.
to_timedeltaConvert argument to timedelta.
to_numericConvert argument to numeric type.
convert_dtypesConvert argument to best possible dtype.
Examples
>>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})
>>> df = df.iloc[1:]
>>> df
A
1 1
2 2
3 3
>>> df.dtypes
A object
dtype: object
>>> df.infer_objects().dtypes
A int64
dtype: object
| 154
| 704
|
Pandas datatype inference
I'm having trouble with datatype inference, so I decided to map datatypes manually; however, I have discovered that I can't look up pandas datatypes as expected.
import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd
pd_to_pa_dtypes = {"object": pa.string(), "string": pa.string(), "double": pa.float64(),
pd.Int32Dtype(): pa.int32(), "int64": pa.int64(),
pd.Int64Dtype(): pa.int64(), "datetime64[ns]": pa.timestamp("ns", tz="UTC"),
pd.StringDtype(): pa.string(), '<M8[ns]': pa.timestamp("ns", tz="UTC")}
date = pd.to_datetime(["30/04/2021", "28/04/2021"], format="%d/%m/%Y")
df = pd.DataFrame(date)
print(df[0].dtype) # which print datetime64[ns]
pd_to_pa_dtypes[df[0].dtype] # KeyError: dtype('<M8[ns]')
# However I inserted in my dictionary both "datetime64[ns]" and "<M8[ns]", also if i check datatypes
df[0].dtype == "datetime64[ns]" # True
df[0].dtype == "<M8[ns]" # True
This also happens for some other datatypes; for example, I had problems with int64, while others are mapped as expected.
|
61,966,045
|
dropna ( ) in multi-header dataframe
|
<p>I created a dataframe using <code>groupby</code> and <code>pd.cut</code> to calculate the mean, std and number of elements inside a bin. I used <code>agg()</code>, and this is the command I used:</p>
<pre><code>df_bin=df.groupby(pd.cut(df.In_X, ranges,include_lowest=True)).agg(['mean', 'std','size'])
</code></pre>
<p>df_bin looks like this: </p>
<pre><code> X Y
mean std size mean std size
In_X
(10.424, 10.43] 10.425 NaN 1 0.003786 NaN 1
(10.43, 10.435] 10.4 NaN 0 NaN NaN 0
</code></pre>
<p>I want to drop rows only when I encounter a NaN case; that is, I want to create a new df_bin without the NaN occurrences. I've tried:</p>
<pre><code> df_bin=df_bin['X', 'mean'].dropna()
</code></pre>
<p>But this drops all other columns of df_bin and keeps only one column.</p>
| 61,966,064
| 2020-05-23T00:47:24.180000
| 1
| null | 1
| 132
|
python|pandas
|
<p>Let us try to pull all the <code>mean</code> columns out and find the nulls:</p>
<pre><code>df_bin=df_bin_temp[df_bin_temp.loc[:,pd.IndexSlice[:,'mean']].notnull().all(1)]
X Y
m s i m s i
0 10.425 NaN 1 0.003786 NaN 1
</code></pre>
<p>Or we can do:</p>
<pre><code>df_bin=df_bin_temp.dropna(subset=df_bin_temp.loc[:,pd.IndexSlice[:,'m']].columns)
X Y
m s i m s i
0 10.425 NaN 1 0.003786 NaN 1
</code></pre>
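<p>A self-contained sketch of the second option on a toy frame shaped like the question's <code>agg(['mean', 'std', 'size'])</code> output (values are illustrative):</p>
<pre><code>import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product([["X", "Y"], ["mean", "std", "size"]])
df_bin = pd.DataFrame(
    [[10.425, np.nan, 1, 0.003786, np.nan, 1],
     [10.4,   np.nan, 0, np.nan,   np.nan, 0]],
    index=["(10.424, 10.43]", "(10.43, 10.435]"],
    columns=cols)

# Drop a row only if one of the 'mean' sub-columns is NaN
mean_cols = df_bin.loc[:, pd.IndexSlice[:, "mean"]].columns
print(df_bin.dropna(subset=mean_cols))   # only the first row survives
</code></pre>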
| 2020-05-23T00:51:55.700000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.stack.html
|
pandas.DataFrame.stack#
pandas.DataFrame.stack#
DataFrame.stack(level=- 1, dropna=True)[source]#
Let us try to pull all the mean columns out and find the nulls:
df_bin=df_bin_temp[df_bin_temp.loc[:,pd.IndexSlice[:,'mean']].notnull().all(1)]
X Y
m s i m s i
0 10.425 NaN 1 0.003786 NaN 1
Or we can do:
df_bin=df_bin_temp.dropna(subset=df_bin_temp.loc[:,pd.IndexSlice[:,'m']].columns)
X Y
m s i m s i
0 10.425 NaN 1 0.003786 NaN 1
Stack the prescribed level(s) from columns to index.
Return a reshaped DataFrame or Series having a multi-level
index with one or more new inner-most levels compared to the current
DataFrame. The new inner-most levels are created by pivoting the
columns of the current dataframe:
if the columns have a single level, the output is a Series;
if the columns have multiple levels, the new index
level(s) is (are) taken from the prescribed level(s) and
the output is a DataFrame.
Parameters
levelint, str, list, default -1Level(s) to stack from the column axis onto the index
axis, defined as one index or label, or a list of indices
or labels.
dropnabool, default TrueWhether to drop rows in the resulting Frame/Series with
missing values. Stacking a column level onto the index
axis can create combinations of index and column values
that are missing from the original dataframe. See Examples
section.
Returns
DataFrame or SeriesStacked dataframe or series.
See also
DataFrame.unstackUnstack prescribed level(s) from index axis onto column axis.
DataFrame.pivotReshape dataframe from long format to wide format.
DataFrame.pivot_tableCreate a spreadsheet-style pivot table as a DataFrame.
Notes
The function is named by analogy with a collection of books
being reorganized from being side by side on a horizontal
position (the columns of the dataframe) to being stacked
vertically on top of each other (in the index of the
dataframe).
Reference the user guide for more examples.
Examples
Single level columns
>>> df_single_level_cols = pd.DataFrame([[0, 1], [2, 3]],
... index=['cat', 'dog'],
... columns=['weight', 'height'])
Stacking a dataframe with a single level column axis returns a Series:
>>> df_single_level_cols
weight height
cat 0 1
dog 2 3
>>> df_single_level_cols.stack()
cat weight 0
height 1
dog weight 2
height 3
dtype: int64
Multi level columns: simple case
>>> multicol1 = pd.MultiIndex.from_tuples([('weight', 'kg'),
... ('weight', 'pounds')])
>>> df_multi_level_cols1 = pd.DataFrame([[1, 2], [2, 4]],
... index=['cat', 'dog'],
... columns=multicol1)
Stacking a dataframe with a multi-level column axis:
>>> df_multi_level_cols1
weight
kg pounds
cat 1 2
dog 2 4
>>> df_multi_level_cols1.stack()
weight
cat kg 1
pounds 2
dog kg 2
pounds 4
Missing values
>>> multicol2 = pd.MultiIndex.from_tuples([('weight', 'kg'),
... ('height', 'm')])
>>> df_multi_level_cols2 = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
... index=['cat', 'dog'],
... columns=multicol2)
It is common to have missing values when stacking a dataframe
with multi-level columns, as the stacked dataframe typically
has more values than the original dataframe. Missing values
are filled with NaNs:
>>> df_multi_level_cols2
weight height
kg m
cat 1.0 2.0
dog 3.0 4.0
>>> df_multi_level_cols2.stack()
height weight
cat kg NaN 1.0
m 2.0 NaN
dog kg NaN 3.0
m 4.0 NaN
Prescribing the level(s) to be stacked
The first parameter controls which level or levels are stacked:
>>> df_multi_level_cols2.stack(0)
kg m
cat height NaN 2.0
weight 1.0 NaN
dog height NaN 4.0
weight 3.0 NaN
>>> df_multi_level_cols2.stack([0, 1])
cat height m 2.0
weight kg 1.0
dog height m 4.0
weight kg 3.0
dtype: float64
Dropping missing values
>>> df_multi_level_cols3 = pd.DataFrame([[None, 1.0], [2.0, 3.0]],
... index=['cat', 'dog'],
... columns=multicol2)
Note that rows where all values are missing are dropped by
default but this behaviour can be controlled via the dropna
keyword parameter:
>>> df_multi_level_cols3
weight height
kg m
cat NaN 1.0
dog 2.0 3.0
>>> df_multi_level_cols3.stack(dropna=False)
height weight
cat kg NaN NaN
m 1.0 NaN
dog kg NaN 2.0
m 3.0 NaN
>>> df_multi_level_cols3.stack(dropna=True)
height weight
cat m 1.0 NaN
dog kg NaN 2.0
m 3.0 NaN
| 101
| 527
|
dropna ( ) in multi-header dataframe
I created a dataframe using groupby and pd.cut to calculate the mean, std and number of elements inside a bin. I used agg(), and this is the command I used:
df_bin=df.groupby(pd.cut(df.In_X, ranges,include_lowest=True)).agg(['mean', 'std','size'])
df_bin looks like this:
X Y
mean std size mean std size
In_X
(10.424, 10.43] 10.425 NaN 1 0.003786 NaN 1
(10.43, 10.435] 10.4 NaN 0 NaN NaN 0
I want to drop rows only when I encounter a NaN case; that is, I want to create a new df_bin without the NaN occurrences. I've tried:
df_bin=df_bin['X', 'mean'].dropna()
But this drops all other columns of df_bin and keeps only one column.
|
63,487,451
|
Export pandas dataframe to excel sheet
|
<p>I have a dictionary of dataframes and need to export them to excel.</p>
<pre><code>mycollection = {'company1': df1, 'company2': df2, 'company3': df3}
</code></pre>
<p>I need to export all 3 dataframes to a single Excel file using 3 sheets. Each sheet should be named after the company, e.g. "company3".</p>
<p>There are around 200 dataframes in the <em>mycollection</em> dictionary. I can use the <strong>to_excel</strong> method to export data to Excel, but that will create 200 Excel files.
Is it possible to export the dataframes to sheets of a single file?</p>
| 63,487,487
| 2020-08-19T12:55:52.070000
| 1
| null | 1
| 165
|
pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html" rel="nofollow noreferrer"><code>ExcelWriter</code></a>, iterate over the dict, and write each DataFrame with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer"><code>DataFrame.to_excel</code></a>:</p>
<pre><code>with pd.ExcelWriter('output.xlsx') as writer:
for k, v in mycollection.items():
v.to_excel(writer, sheet_name=k)
</code></pre>
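<p>A self-contained sketch with a toy dictionary standing in for the ~200 frames. Excel caps sheet names at 31 characters, so the key is truncated defensively:</p>
<pre><code>import pandas as pd

# Toy stand-in for the real dictionary of DataFrames
mycollection = {f"company{i}": pd.DataFrame({"x": [i, i + 1]}) for i in range(1, 4)}

with pd.ExcelWriter("output.xlsx") as writer:
    for name, frame in mycollection.items():
        frame.to_excel(writer, sheet_name=str(name)[:31])
</code></pre>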
| 2020-08-19T12:57:54.740000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html
|
pandas.DataFrame.to_excel#
pandas.DataFrame.to_excel#
DataFrame.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=_NoDefault.no_default, inf_rep='inf', verbose=_NoDefault.no_default, freeze_panes=None, storage_options=None)[source]#
Use ExcelWriter, iterate over the dict, and write each DataFrame with DataFrame.to_excel:
with pd.ExcelWriter('output.xlsx') as writer:
for k, v in mycollection.items():
v.to_excel(writer, sheet_name=k)
Write object to an Excel sheet.
To write a single object to an Excel .xlsx file it is only necessary to
specify a target file name. To write to multiple sheets it is necessary to
create an ExcelWriter object with a target file name, and specify a sheet
in the file to write to.
Multiple sheets may be written to by specifying unique sheet_name.
With all data written to the file it is necessary to save the changes.
Note that creating an ExcelWriter object with a file name that already
exists will result in the contents of the existing file being erased.
Parameters
excel_writerpath-like, file-like, or ExcelWriter objectFile path or existing ExcelWriter.
sheet_namestr, default ‘Sheet1’Name of sheet which will contain DataFrame.
na_repstr, default ‘’Missing data representation.
float_formatstr, optionalFormat string for floating point numbers. For example
float_format="%.2f" will format 0.1234 to 0.12.
columnssequence or list of str, optionalColumns to write.
headerbool or list of str, default TrueWrite out the column names. If a list of string is given it is
assumed to be aliases for the column names.
indexbool, default TrueWrite row names (index).
index_labelstr or sequence, optionalColumn label for index column(s) if desired. If not specified, and
header and index are True, then the index names are used. A
sequence should be given if the DataFrame uses MultiIndex.
startrowint, default 0Upper left cell row to dump data frame.
startcolint, default 0Upper left cell column to dump data frame.
enginestr, optionalWrite engine to use, ‘openpyxl’ or ‘xlsxwriter’. You can also set this
via the options io.excel.xlsx.writer, io.excel.xls.writer, and
io.excel.xlsm.writer.
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed in a future version
of pandas.
merge_cellsbool, default TrueWrite MultiIndex and Hierarchical Rows as merged cells.
encodingstr, optionalEncoding of the resulting excel file. Only necessary for xlwt,
other writers support unicode natively.
Deprecated since version 1.5.0: This keyword was not used.
inf_repstr, default ‘inf’Representation for infinity (there is no native representation for
infinity in Excel).
verbosebool, default TrueDisplay more information in the error logs.
Deprecated since version 1.5.0: This keyword was not used.
freeze_panestuple of int (length 2), optionalSpecifies the one-based bottommost row and rightmost column that
is to be frozen.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
See also
to_csvWrite DataFrame to a comma-separated values (csv) file.
ExcelWriterClass for writing DataFrame objects into excel sheets.
read_excelRead an Excel file into a pandas DataFrame.
read_csvRead a comma-separated values (csv) file into DataFrame.
io.formats.style.Styler.to_excelAdd styles to Excel sheet.
Notes
For compatibility with to_csv(),
to_excel serializes lists and dicts to strings before writing.
Once a workbook has been saved it is not possible to write further
data without rewriting the whole workbook.
Examples
Create, write to and save a workbook:
>>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
>>> df1.to_excel("output.xlsx")
To specify the sheet name:
>>> df1.to_excel("output.xlsx",
... sheet_name='Sheet_name_1')
If you wish to write to more than one sheet in the workbook, it is
necessary to specify an ExcelWriter object:
>>> df2 = df1.copy()
>>> with pd.ExcelWriter('output.xlsx') as writer:
... df1.to_excel(writer, sheet_name='Sheet_name_1')
... df2.to_excel(writer, sheet_name='Sheet_name_2')
ExcelWriter can also be used to append to an existing Excel file:
>>> with pd.ExcelWriter('output.xlsx',
... mode='a') as writer:
... df.to_excel(writer, sheet_name='Sheet_name_3')
To set the library that is used to write the Excel file,
you can pass the engine keyword (the default engine is
automatically chosen depending on the file extension):
>>> df1.to_excel('output1.xlsx', engine='xlsxwriter')
| 382
| 574
|
Export pandas dataframe to excel sheet
I have a dictionary of dataframes and need to export them to excel.
mycollection = {'company1': df1, 'company2': df2, 'company3': df3}
I need to export all 3 dataframes to a single Excel file using 3 sheets. Each sheet should be named after the company, e.g. "company3".
There are around 200 dataframes in the mycollection dictionary. I can use the to_excel method to export data to Excel, but that will create 200 Excel files.
Is it possible to export the dataframes to sheets of a single file?
|
62,915,504
|
How to calculate a weighted average in Python for each unique value in two columns?
|
<p>The picture below shows a few lines of printed lists I have in Python. I would like to get: a list of unique borough values, a corresponding list of unique year values, and a list of weighted averages of "averages" with "nobs" as weights, computed for each borough and each year (the variable "type" indicates whether there were one, two or three types in a given year in a borough).</p>
<p>I know how to get a weighted average using the entire lists:</p>
<pre><code>weighted_avg = np.average(average, weights=nobs)
</code></pre>
<p>But I don't know how to calculate one for each unique borough-year.</p>
<p><a href="https://i.stack.imgur.com/5d0zk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5d0zk.png" alt="enter image description here" /></a></p>
<p>I'm new to Python, please help if you know how to do it.</p>
| 62,916,349
| 2020-07-15T13:01:33.597000
| 1
| null | 0
| 165
|
python|pandas
|
<p>Assuming that the 'type' column doesn't affect your calculations, you can get the average using <code>groupby</code>. Here's the data:</p>
<pre><code>df = pd.DataFrame({'borough': ['b1', 'b2']*6, 'year': [2008, 2009, 2010, 2011]*3,
'average': np.random.randint(low=100, high=200, size=12),
'nobs': np.random.randint(low=1, high=40, size=12)})
print(df):
borough year average nobs
0 b1 2008 166 1
1 b2 2009 177 35
2 b1 2010 114 27
3 b2 2011 187 18
4 b1 2008 193 2
5 b2 2009 105 27
6 b1 2010 114 36
7 b2 2011 144 3
8 b1 2008 114 39
9 b2 2009 157 6
10 b1 2010 133 17
11 b2 2011 176 12
</code></pre>
<p>We then add a new column that is the product of the average and nobs columns, and divide the group sums to get the weighted averages:</p>
<pre><code>df['average x nobs'] = df['average']*df['nobs']
newdf = pd.DataFrame({'weighted average': df.groupby(['borough', 'year']).sum()['average x nobs']/df.groupby(['borough', 'year']).sum()['nobs']})
print(newdf):
weighted average
borough year
b1 2008 119.000000
2010 118.037500
b2 2009 146.647059
2011 179.090909
</code></pre>
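<p>As a shorter alternative, a sketch using the same <code>df</code>: the per-group weighted average can also be computed directly with <code>groupby().apply</code> and <code>np.average</code>:</p>
<pre><code>import numpy as np

wavg = df.groupby(['borough', 'year']).apply(
    lambda g: np.average(g['average'], weights=g['nobs'])
)
print(wavg)
</code></pre>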
| 2020-07-15T13:45:11.210000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Assuming that the 'type' column doesn't affect your calculations, you can get the average using groupby. Here's the data:
df = pd.DataFrame({'borough': ['b1', 'b2']*6, 'year': [2008, 2009, 2010, 2011]*3,
'average': np.random.randint(low=100, high=200, size=12),
'nobs': np.random.randint(low=1, high=40, size=12)})
print(df):
borough year average nobs
0 b1 2008 166 1
1 b2 2009 177 35
2 b1 2010 114 27
3 b2 2011 187 18
4 b1 2008 193 2
5 b2 2009 105 27
6 b1 2010 114 36
7 b2 2011 144 3
8 b1 2008 114 39
9 b2 2009 157 6
10 b1 2010 133 17
11 b2 2011 176 12
We then add a new column that is the product of the average and nobs columns, and divide the group sums to get the weighted averages:
df['average x nobs'] = df['average']*df['nobs']
newdf = pd.DataFrame({'weighted average': df.groupby(['borough', 'year']).sum()['average x nobs']/df.groupby(['borough', 'year']).sum()['nobs']})
print(newdf):
weighted average
borough year
b1 2008 119.000000
2010 118.037500
b2 2009 146.647059
2011 179.090909
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
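As a rough sketch of how the SQL statement above maps onto pandas (SomeTable and the column names are illustrative placeholders):
import pandas as pd

some_table = pd.DataFrame(
    {
        "Column1": ["x", "x", "y"],
        "Column2": ["a", "b", "a"],
        "Column3": [1.0, 2.0, 3.0],
        "Column4": [10, 20, 30],
    }
)

# SELECT Column1, Column2, mean(Column3), sum(Column4)
# FROM SomeTable GROUP BY Column1, Column2
result = some_table.groupby(["Column1", "Column2"]).agg(
    {"Column3": "mean", "Column4": "sum"}
)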
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
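For instance, a minimal sketch of a user-defined reducer (the per-group range of column C, reusing the df from the examples above):
# any callable that maps a Series to a scalar is a valid aggregation
df.groupby("A")["C"].agg(lambda ser: ser.max() - ser.min())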
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation functions
requires additional arguments, partially apply them with functools.partial().
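A minimal sketch of that pattern with the animals frame above (the 0.75 quantile is an arbitrary illustrative choice):
import functools

# partially apply the extra argument so the aggfunc receives only the column
q75 = functools.partial(np.quantile, q=0.75)

animals.groupby("kind").agg(weight_q75=pd.NamedAgg(column="weight", aggfunc=q75))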
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
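A sketch of invoking the function just defined (the concrete values depend on the random data generated earlier; each group's Series expands into a two-column DataFrame):
# grouped and f are the objects defined directly above
grouped.apply(f)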
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
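A minimal sketch of the required signature (this assumes the optional numba dependency is installed and reuses the df from earlier examples; group_mean is an illustrative name):
def group_mean(values, index):
    # values: the group's data as a NumPy array; index: the group's index labels (unused here)
    return values.mean()

df.groupby("A")["C"].agg(group_mean, engine="numba")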
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
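A small sketch of this behavior, together with the dropna=False escape hatch described earlier in this document:
df_na = pd.DataFrame({"key": ["a", np.nan, "a"], "val": [1, 2, 3]})

df_na.groupby("key").sum()                 # the row with the NaN key is excluded
df_na.groupby("key", dropna=False).sum()   # keeps an explicit NaN group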
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
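To see why the result has columns 1 and 9, it helps to look at the grouping keys themselves; this short sketch (an illustrative addition) prints them:
print(df.sum())
# a    1
# b    1
# c    1
# d    9
# dtype: int64
# Columns a, b and c share the key 1 and are summed together; d forms the group 9.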
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order to resample to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array (containing only 0s and 1s here) which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 32
| 1,264
|
How to calculate a weighted average in Python for each unique value in two columns?
The picture below shows a few lines of printed lists I have in Python. I would like to get: a list of unique values of boroughs, a corresponding list of unique values of years, and, for each borough-year pair, the weighted average of "averages" using "nobs" as weights (the variable "type" indicates whether there were one, two or three types in a specific year in a borough).
I know how to get a weighted average using the entire lists:
weighted_avg = np.average(average, weights=nobs)
But I don't know how to calculate one for each unique borough-year.
I'm new to Python; please help if you know how to do it.
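A minimal sketch of one way to do this (assuming the printed lists have been placed in a DataFrame with hypothetical columns "borough", "year", "average" and "nobs"; the names are illustrative, not from the question):
import numpy as np
import pandas as pd
df = pd.DataFrame({
    "borough": ["A", "A", "B", "B"],
    "year": [2019, 2019, 2019, 2020],
    "average": [10.0, 20.0, 30.0, 40.0],
    "nobs": [1, 3, 2, 2],
})
weighted = (
    df.groupby(["borough", "year"])
      .apply(lambda g: np.average(g["average"], weights=g["nobs"]))
      .reset_index(name="weighted_avg")
)
print(weighted)  # one weighted average per unique borough-year pair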
|
62,921,430
|
pandas groupby latest observation for each group
|
<p>I have a panel dataframe (ID and time) and want to collect the most recent (latest) row for each ID. Here is the table:</p>
<p><code>df = pd.DataFrame({'ID': [1,1,2,3] , 'Year': [2018,2019,2019,2020] , 'Var1':list("abcd") , 'Var2': list("efgh")})</code></p>
<p><a href="https://i.stack.imgur.com/bw03I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bw03I.png" alt="enter image description here" /></a></p>
<p>and the end result would be:</p>
<p><a href="https://i.stack.imgur.com/qqJzd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qqJzd.png" alt="enter image description here" /></a></p>
| 62,921,493
| 2020-07-15T18:23:26.007000
| 2
| null | 0
| 172
|
python|pandas
|
<p>Use drop_duplicates:</p>
<pre><code>df.sort_values('Year').drop_duplicates('ID', keep='last')
</code></pre>
<p>Output:</p>
<pre><code> ID Year Var1 Var2
1 1 2019 b f
2 2 2019 c g
3 3 2020 d h
</code></pre>
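<p>A hedged alternative sketch (not part of the original answer) using <code>groupby</code> with <code>last</code>, assuming <code>Year</code> fully determines recency within each <code>ID</code>:</p>
<pre><code>df.sort_values('Year').groupby('ID', as_index=False).last()
</code></pre>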
| 2020-07-15T18:27:58.253000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.last.html
|
pandas.core.groupby.GroupBy.last#
pandas.core.groupby.GroupBy.last#
final GroupBy.last(numeric_only=False, min_count=- 1)[source]#
Compute the last non-null entry of each column.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
min_countint, default -1The required number of valid values to perform the operation. If fewer
than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrameLast non-null of values within each group.
See also
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
Use drop_duplicates:
df.sort_values('Year').drop_duplicates('ID', keep='last')
Output:
ID Year Var1 Var2
1 1 2019 b f
2 2 2019 c g
3 3 2020 d h
DataFrame.core.groupby.GroupBy.firstCompute the first non-null entry of each column.
DataFrame.core.groupby.GroupBy.nthTake the nth row from each group.
Examples
>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[5, None, 6], C=[1, 2, 3]))
>>> df.groupby("A").last()
B C
A
1 5.0 2
3 6.0 3
| 667
| 843
|
pandas groupby latest observation for each group
I have a panel dataframe (ID and time) and want to collect the most recent (latest) row for each ID. Here is the table:
df = pd.DataFrame({'ID': [1,1,2,3] , 'Year': [2018,2019,2019,2020] , 'Var1':list("abcd") , 'Var2': list("efgh")})
and the end result would be:
|
69,559,627
|
Python dataframe unexpected display error using loc
|
<p>I'm creating an additional column "Total_Count" to store the cumulative count record based on the Site and Count_Record columns. My code is almost done for the total cumulative count. However, the Total_Count column is shifted for a specific card, as shown below. Could someone help with a code modification? Thank you!</p>
<p><strong>Expected Output:</strong></p>
<p><a href="https://i.stack.imgur.com/k0iXY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k0iXY.png" alt="enter image description here" /></a></p>
<p><strong>Current Output:</strong></p>
<p><a href="https://i.stack.imgur.com/rAsSr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rAsSr.png" alt="enter image description here" /></a></p>
<p><strong>My Code:</strong></p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(columns=['site', 'card', 'date', 'count_record'],
data=[['A', 'C1', '12-Oct', 5],
['A', 'C1', '13-Oct', 10],
['A', 'C1', '14-Oct', 18],
['A', 'C1', '15-Oct', 21],
['A', 'C1', '16-Oct', 29],
['B', 'C2', '12-Oct', 11],
['A', 'C2', '13-Oct', 2],
['A', 'C2', '14-Oct', 7],
['A', 'C2', '15-Oct', 13],
['B', 'C2', '16-Oct', 4]])
df_append_temp=[]
total = 0
preCard = ''
preSite = ''
preCount = 0
for pc in df1['card'].unique():
df2 = df1[df1['card'] == pc].sort_values(['date'])
total = 0
for i in range(0, len(df2)):
site = df2.iloc[i]['site']
count = df2.iloc[i]['count_record']
if site == preSite:
total += (count - preCount)
else:
total += count
preCount = count
preSite = site
df2.loc[i, 'Total_Count'] = total #something wrong using loc here
df_append_temp.append(df2)
df3 = pd.DataFrame(pd.concat(df_append_temp), columns=df2.columns)
df3
</code></pre>
| 69,560,005
| 2021-10-13T17:10:09.167000
| 1
| null | 1
| 174
|
python|pandas
|
<p>To modify the current implementation we can use <a href="https://pandas.pydata.org/docs/user_guide/groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> to create our <code>df2</code> which allows us to <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>apply</code></a> a function to each grouped DataFrame to create the new column. This should offer similar performance as the current implementation but produce correctly aligned Series:</p>
<pre><code>def calc_total_count(df2: pd.DataFrame) -> pd.Series:
total = 0
pre_count = 0
pre_site = ''
lst = []
for c, s in zip(df2['count_record'], df2['site']):
if s == pre_site:
total += (c - pre_count)
else:
total += c
pre_count = c
pre_site = s
lst.append(total)
return pd.Series(lst, index=df2.index, name='Total_Count')
df3 = pd.concat([
df1,
df1.sort_values('date').groupby('card').apply(calc_total_count).droplevel(0)
], axis=1)
</code></pre>
<p>Alternatively we can use <code>groupby</code>, then within groups <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.shift.html" rel="nofollow noreferrer"><code>Series.shift</code></a> to get the previous <code>site</code>, and <code>count_record</code>. Then use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> to conditionally determine each row's value and <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.cumsum.html" rel="nofollow noreferrer"><code>ndarray.cumsum</code></a> to calculate the cumulative total of the resulting values:</p>
<pre><code>def calc_total_count(df2: pd.DataFrame) -> pd.Series:
return pd.Series(
np.where(df2['site'] == df2['site'].shift(),
df2['count_record'] - df2['count_record'].shift(fill_value=0),
df2['count_record']).cumsum(),
index=df2.index,
name='Total_Count'
)
df3 = pd.concat([
df1,
df1.sort_values('date').groupby('card').apply(calc_total_count).droplevel(0)
], axis=1)
</code></pre>
<p>Either approach produces <code>df3</code>:</p>
<pre><code> site card date count_record Total_Count
0 A C1 12-Oct 5 5
1 A C1 13-Oct 10 10
2 A C1 14-Oct 18 18
3 A C1 15-Oct 21 21
4 A C1 16-Oct 29 29
5 B C2 12-Oct 11 11
6 A C2 13-Oct 2 13
7 A C2 14-Oct 7 18
8 A C2 15-Oct 13 24
9 B C2 16-Oct 4 28
</code></pre>
<hr />
<p>Setup and imports:</p>
<pre><code>import numpy as np # only needed if using np.where
import pandas as pd
df1 = pd.DataFrame(columns=['site', 'card', 'date', 'count_record'],
data=[['A', 'C1', '12-Oct', 5],
['A', 'C1', '13-Oct', 10],
['A', 'C1', '14-Oct', 18],
['A', 'C1', '15-Oct', 21],
['A', 'C1', '16-Oct', 29],
['B', 'C2', '12-Oct', 11],
['A', 'C2', '13-Oct', 2],
['A', 'C2', '14-Oct', 7],
['A', 'C2', '15-Oct', 13],
['B', 'C2', '16-Oct', 4]])
</code></pre>
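<p>For reference, a minimal sketch (an addition, not from the original answer) of why the original <code>df2.loc[i, 'Total_Count'] = total</code> misbehaves: after filtering, <code>df2</code> keeps the row labels of <code>df1</code>, so <code>loc</code> with a fresh 0-based counter <code>i</code> can create brand-new rows instead of updating existing ones:</p>
<pre><code>tmp = df1[df1['card'] == 'C2'].sort_values(['date'])
print(tmp.index.tolist())        # [5, 6, 7, 8, 9] -- not [0, 1, 2, 3, 4]
tmp.loc[0, 'Total_Count'] = 11   # adds a new row labelled 0 rather than updating the first row
print(tmp.tail(1))
</code></pre>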
| 2021-10-13T17:41:10.320000
| 1
|
https://pandas.pydata.org/docs/whatsnew/v1.5.0.html
|
To modify the current implementation we can use groupby to create our df2 which allows us to apply a function to each grouped DataFrame to create the new column. This should offer similar performance as the current implementation but produce correctly aligned Series:
def calc_total_count(df2: pd.DataFrame) -> pd.Series:
total = 0
pre_count = 0
pre_site = ''
lst = []
for c, s in zip(df2['count_record'], df2['site']):
if s == pre_site:
total += (c - pre_count)
else:
total += c
pre_count = c
pre_site = s
lst.append(total)
return pd.Series(lst, index=df2.index, name='Total_Count')
df3 = pd.concat([
df1,
df1.sort_values('date').groupby('card').apply(calc_total_count).droplevel(0)
], axis=1)
Alternatively we can use groupby, then within groups Series.shift to get the previous site, and count_record. Then use np.where to conditionally determine each row's value and ndarray.cumsum to calculate the cumulative total of the resulting values:
def calc_total_count(df2: pd.DataFrame) -> pd.Series:
return pd.Series(
np.where(df2['site'] == df2['site'].shift(),
df2['count_record'] - df2['count_record'].shift(fill_value=0),
df2['count_record']).cumsum(),
index=df2.index,
name='Total_Count'
)
df3 = pd.concat([
df1,
df1.sort_values('date').groupby('card').apply(calc_total_count).droplevel(0)
], axis=1)
Either approach produces df3:
site card date count_record Total_Count
0 A C1 12-Oct 5 5
1 A C1 13-Oct 10 10
2 A C1 14-Oct 18 18
3 A C1 15-Oct 21 21
4 A C1 16-Oct 29 29
5 B C2 12-Oct 11 11
6 A C2 13-Oct 2 13
7 A C2 14-Oct 7 18
8 A C2 15-Oct 13 24
9 B C2 16-Oct 4 28
Setup and imports:
import numpy as np # only needed if using np.where
import pandas as pd
df1 = pd.DataFrame(columns=['site', 'card', 'date', 'count_record'],
data=[['A', 'C1', '12-Oct', 5],
['A', 'C1', '13-Oct', 10],
['A', 'C1', '14-Oct', 18],
['A', 'C1', '15-Oct', 21],
['A', 'C1', '16-Oct', 29],
['B', 'C2', '12-Oct', 11],
['A', 'C2', '13-Oct', 2],
['A', 'C2', '14-Oct', 7],
['A', 'C2', '15-Oct', 13],
['B', 'C2', '16-Oct', 4]])
| 0
| 2,708
|
Python dataframe unexpected display error using loc
I'm creating an additional column "Total_Count" to store the cumulative count record based on the Site and Count_Record columns. My code is almost done for the total cumulative count. However, the Total_Count column is shifted for a specific card, as shown below. Could someone help with a code modification? Thank you!
Expected Output:
Current Output:
My Code:
import pandas as pd
df1 = pd.DataFrame(columns=['site', 'card', 'date', 'count_record'],
data=[['A', 'C1', '12-Oct', 5],
['A', 'C1', '13-Oct', 10],
['A', 'C1', '14-Oct', 18],
['A', 'C1', '15-Oct', 21],
['A', 'C1', '16-Oct', 29],
['B', 'C2', '12-Oct', 11],
['A', 'C2', '13-Oct', 2],
['A', 'C2', '14-Oct', 7],
['A', 'C2', '15-Oct', 13],
['B', 'C2', '16-Oct', 4]])
df_append_temp=[]
total = 0
preCard = ''
preSite = ''
preCount = 0
for pc in df1['card'].unique():
df2 = df1[df1['card'] == pc].sort_values(['date'])
total = 0
for i in range(0, len(df2)):
site = df2.iloc[i]['site']
count = df2.iloc[i]['count_record']
if site == preSite:
total += (count - preCount)
else:
total += count
preCount = count
preSite = site
df2.loc[i, 'Total_Count'] = total #something wrong using loc here
df_append_temp.append(df2)
df3 = pd.DataFrame(pd.concat(df_append_temp), columns=df2.columns)
df3
|
59,833,940
|
Merging 3 datasets together
|
<p>I have 3 datasets (CSV) that I need to merge together in order to make a motion chart.
They all have countries as the columns and years as rows.</p>
<p>The other two datasets look the same except one is population and the other is income. I've tried looking around for a way to get the dataset into the shape I'd like but can't seem to find anything. </p>
<p>I tried using pd.concat but it just lists it all one after the other, not in separate columns. </p>
<p>Merge all 3 datasets in preparation for making the motion chart using pd.concat
<code>mc_data = pd.concat([df2, pop_data3, income_data3], sort = True)</code></p>
<p>Any sort of help would be appreciated </p>
<p>EDIT: I have used the code as suggested; however, I get a heap of NaN values that shouldn't be there.</p>
<p><code>mc_data = pd.concat([df2, pop_data3, income_data3], axis = 1, keys = ['df2', 'pop_data3', 'income_data3'])</code></p>
<p>EDIT2: When I run .info and .index on them I get these results. Could it have to do with the data types or the column entries?</p>
<p><img src="https://i.stack.imgur.com/dSCt6.png" alt="python output"></p>
| 59,834,032
| 2020-01-21T03:50:25.957000
| 1
| null | 0
| 178
|
python|pandas
|
<p>From <a href="https://stackoverflow.com/a/23600844">this answer</a>:</p>
<p>You can do it with <code>concat</code> (the <code>keys</code> argument will create the hierarchical columns index): </p>
<pre><code>pd.concat([df2, pop_data3, income_data3], axis=1, keys=['df2', 'pop_data3', 'income_data3'])
</code></pre>
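<p>A follow-up sketch (an assumption, not from the original answer) for the NaN issue mentioned in the edit: with <code>axis=1</code>, <code>concat</code> aligns on the index, so the year indexes of all three frames must contain the same values with the same dtype. Converting them to a common dtype first often removes the spurious NaN values:</p>
<pre><code>for d in (df2, pop_data3, income_data3):
    d.index = d.index.astype(int)   # or .astype(str), as long as all three match

mc_data = pd.concat([df2, pop_data3, income_data3], axis=1,
                    keys=['df2', 'pop_data3', 'income_data3'])
</code></pre>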
| 2020-01-21T04:04:39.020000
| 1
|
https://pandas.pydata.org/docs/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
From this answer:
You can do it with concat (the keys argument will create the hierarchical columns index):
pd.concat([df2, pop_data3, income_data3], axis=1, keys=['df2', 'pop_data3', 'income_data3'])
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() (and therefore
append()) makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists of letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
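A short sketch of the appropriately-indexed variant (an illustrative addition), reusing df1 from above and giving the new row an explicit label instead of using ignore_index:
row = pd.DataFrame([["A4", "B4", "C4", "D4"]], columns=["A", "B", "C", "D"], index=[4])
result = pd.concat([df1, row])   # the new row keeps the label 4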
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method
SQL Join Name
Description
left
LEFT OUTER JOIN
Use keys from left frame only
right
RIGHT OUTER JOIN
Use keys from right frame only
outer
FULL OUTER JOIN
Use union of keys from both frames
inner
INNER JOIN
Use intersection of keys from both frames
cross
CROSS JOIN
Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin
_merge value
Merge key only in 'left' frame
left_only
Merge key only in 'right' frame
right_only
Merge key in both frames
both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the following:
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple of list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
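A small sketch (added for contrast, not part of the original text) of the difference between the two methods:
patched = df1.combine_first(df2)   # only the NaN cells of df1 are filled from df2
df1.update(df2)                    # every non-NA cell of df2 overwrites df1, in place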
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example; we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The compare() and compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 281
| 483
|
Merging 3 datasets together
I have 3 datasets (CSV) that I need to merge together in order to make a motion chart.
They all have countries as the columns and years as rows.
The other two datasets look the same except one is population and the other is income. I've tried looking around for a way to get the dataset into the shape I'd like but can't seem to find anything.
I tried using pd.concat but it just lists it all one after the other, not in separate columns.
Merge all 3 datasets in preparation for making the motion chart using pd.concat
mc_data = pd.concat([df2, pop_data3, income_data3], sort = True)
Any sort of help would be appreciated
EDIT: I have used the code as suggested; however, I get a heap of NaN values that shouldn't be there.
mc_data = pd.concat([df2, pop_data3, income_data3], axis = 1, keys = ['df2', 'pop_data3', 'income_data3'])
EDIT2: When I run .info and .index on them I get these results. Could it have to do with the data types or the column entries?
|
68,206,354
|
How to save pandas series in a single row instead of column
|
<p>By default a pandas Series is stored in a single column when we do</p>
<pre><code>series.to_csv('series.csv')
</code></pre>
<p>but I want to store it in a single row, i.e. store its transpose.
I tried</p>
<pre><code>series.T.to_csv('series.csv')
</code></pre>
<p>but the output in the CSV is still a single column with multiple rows, whereas I need a single row with multiple columns.</p>
| 68,206,394
| 2021-07-01T07:54:07.693000
| 1
| null | 0
| 187
|
python|pandas
|
<p>Try via <code>to_frame()</code>:</p>
<pre><code>series.to_frame().T.to_csv('series.csv',index=False)
#If you don't want to include header then pass header=False in the to_csv() method
</code></pre>
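<p>A small usage sketch (an addition, not from the original answer) showing how the single-row CSV can be read back as a Series:</p>
<pre><code>import pandas as pd

s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s.to_frame().T.to_csv('series.csv', index=False)

restored = pd.read_csv('series.csv').iloc[0]   # a Series indexed by 'a', 'b', 'c'
</code></pre>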
| 2021-07-01T07:56:41.060000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.squeeze.html
|
pandas.DataFrame.squeeze#
pandas.DataFrame.squeeze#
DataFrame.squeeze(axis=None)[source]#
Squeeze 1 dimensional axis objects into scalars.
Series or DataFrames with a single element are squeezed to a scalar.
DataFrames with a single column or a single row are squeezed to a
Try via to_frame():
series.to_frame().T.to_csv('series.csv',index=False)
#If you don't want to include header then pass header=False in the to_csv() method
Series. Otherwise the object is unchanged.
This method is most useful when you don’t know if your
object is a Series or DataFrame, but you do know it has just a single
column. In that case you can safely call squeeze to ensure you have a
Series.
Parameters
axis{0 or ‘index’, 1 or ‘columns’, None}, default NoneA specific axis to squeeze. By default, all length-1 axes are
squeezed. For Series this parameter is unused and defaults to None.
Returns
DataFrame, Series, or scalarThe projection after squeezing axis or all the axes.
See also
Series.ilocInteger-location based indexing for selecting scalars.
DataFrame.ilocInteger-location based indexing for selecting Series.
Series.to_frameInverse of DataFrame.squeeze for a single-column DataFrame.
Examples
>>> primes = pd.Series([2, 3, 5, 7])
Slicing might produce a Series with a single value:
>>> even_primes = primes[primes % 2 == 0]
>>> even_primes
0 2
dtype: int64
>>> even_primes.squeeze()
2
Squeezing objects with more than one value in every axis does nothing:
>>> odd_primes = primes[primes % 2 == 1]
>>> odd_primes
1 3
2 5
3 7
dtype: int64
>>> odd_primes.squeeze()
1 3
2 5
3 7
dtype: int64
Squeezing is even more effective when used with DataFrames.
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
>>> df
a b
0 1 2
1 3 4
Slicing a single column will produce a DataFrame with the columns
having only one value:
>>> df_a = df[['a']]
>>> df_a
a
0 1
1 3
So the columns can be squeezed down, resulting in a Series:
>>> df_a.squeeze('columns')
0 1
1 3
Name: a, dtype: int64
Slicing a single row from a single column will produce a single
scalar DataFrame:
>>> df_0a = df.loc[df.index < 1, ['a']]
>>> df_0a
a
0 1
Squeezing the rows produces a single scalar Series:
>>> df_0a.squeeze('rows')
a 1
Name: 0, dtype: int64
Squeezing all axes will project directly into a scalar:
>>> df_0a.squeeze()
1
| 278
| 434
|
How to save pandas series in a single row instead of column
By default pandas series is stored in a single column when we do
series.to_csv('series.csv')
but I want to store it in a single row, like storing the transpose of it.
I tried
series.T.to_csv('series.csv')
but the output in the csv is still a single column spread over multiple rows, whereas I need a single row with multiple columns
|
64,662,708
|
Pandas how to make group by without losing other column information
|
<p>I've a df like this</p>
<pre><code> source destination weight partition
0 1 2 193 7
1 1 22 2172 7
2 2 3 188 7
3 2 1 193 7
4 2 4 403 7
... ... ... ... ...
8865 3351 3352 719 4
8866 3351 2961 6009 4
8867 3352 3351 719 4
8868 3353 1540 128 2
8869 3353 1377 198 2
</code></pre>
<p>which is an edge list of a graph; partition is the source vertex's partition information.
I want to group by source and collect the destinations to find all neighbours.
I tried this</p>
<pre class="lang-py prettyprint-override"><code>group = result.groupby("source")["destination"].apply(list).reset_index(name="neighbours")
</code></pre>
<p>and result is:</p>
<pre><code> source neighbours
0 1 [2, 22]
1 2 [3, 1, 4]
2 3 [2]
3 4 [21, 2, 5]
4 5 [4, 8]
... ... ...
3348 3349 [3350, 3345, 3324]
3349 3350 [3349]
3350 3351 [2896, 3352, 2961]
3351 3352 [3351]
3352 3353 [1540, 1377]
</code></pre>
<p>but here, as you can see, I'm losing the partition information.
Is there a way to keep this column as well?</p>
| 64,662,732
| 2020-11-03T12:18:24.950000
| 1
| null | 2
| 192
|
python|pandas
|
<p>If you need to group by both columns, use:</p>
<pre><code>group = (result.groupby(["source", "partition"])["destination"]
.apply(list)
.reset_index(name="neighbours"))
</code></pre>
<p>If you need the first value of the <code>partition</code> column, aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> and <code>first</code>:</p>
<pre><code>group = (result.groupby("source")
               .agg({"partition": "first", "destination": list})
               .rename(columns={"destination": "neighbours"})
               .reset_index())
</code></pre>
<p>(<code>DataFrame.reset_index</code> does not take a <code>name</code> argument, so the aggregated column is renamed instead.)</p>
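For reference, a quick check of the first suggestion on a few rows shaped like the question's data (the values are made up):
import pandas as pd

result = pd.DataFrame({
    "source": [1, 1, 2, 2, 2],
    "destination": [2, 22, 3, 1, 4],
    "weight": [193, 2172, 188, 193, 403],
    "partition": [7, 7, 7, 7, 7],
})
group = (result.groupby(["source", "partition"])["destination"]
               .apply(list)
               .reset_index(name="neighbours"))
print(group)
# (column spacing approximate)
#    source  partition neighbours
# 0       1          7    [2, 22]
# 1       2          7  [3, 1, 4]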
| 2020-11-03T12:20:02.157000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
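A compact sketch of these three flavours on a toy frame (the data is made up):
import pandas as pd

df_toy = pd.DataFrame({"team": ["a", "a", "b", "b"], "pts": [1, 2, 3, 40]})

totals = df_toy.groupby("team")["pts"].sum()                                # aggregation
demeaned = df_toy.groupby("team")["pts"].transform(lambda s: s - s.mean())  # transformation
big_teams = df_toy.groupby("team").filter(lambda g: g["pts"].sum() > 10)    # filtration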
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping (see the sketch after this list).
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
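As a minimal sketch of the dict case (the series and mapping are made up), index labels are translated to group names and those names become the group keys:
import pandas as pd

s = pd.Series([10, 20, 30, 40], index=["jan", "feb", "mar", "apr"])
quarter = {"jan": "Q1", "feb": "Q1", "mar": "Q1", "apr": "Q2"}
s.groupby(quarter).sum()
# Q1    60
# Q2    40
# dtype: int64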
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
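A minimal sketch of that partial application, reusing the animals frame from the examples above (trimmed_mean is a hypothetical helper, not a pandas function): the extra frac argument is bound up front so the aggregation only receives the column values.
import functools

import numpy as np
import pandas as pd

def trimmed_mean(x, frac):
    # Drop the lowest and highest frac share of values, then average the rest
    x = np.sort(np.asarray(x, dtype=float))
    k = int(len(x) * frac)
    return x[k:len(x) - k].mean() if len(x) > 2 * k else x.mean()

animals.groupby("kind").agg(
    trimmed_weight=pd.NamedAgg(column="weight",
                               aggfunc=functools.partial(trimmed_mean, frac=0.1)),
)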
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
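A minimal sketch, assuming the optional numba dependency is installed (numba_mean is a hypothetical reducer written with a plain loop so it can be JIT-compiled): values receives the group's data as a NumPy array and index its group index, which is unused here but must be part of the signature.
import pandas as pd

def numba_mean(values, index):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

df_nb = pd.DataFrame({"A": ["x", "x", "y"], "C": [1.0, 3.0, 5.0]})
df_nb.groupby("A")["C"].agg(numba_mean, engine="numba")
# A
# x    2.0
# y    5.0
# Name: C, dtype: float64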
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a "nuisance" column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
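A small sketch of this behaviour (data made up): the row whose key is NaN simply drops out of the result.
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3], index=["x", np.nan, "x"])
s.groupby(level=0).sum()
# x    4
# dtype: int64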
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples into bins. By applying the std() function, we condense the information contained in many samples into a small subset of values (their standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 147
| 545
|
Pandas how to make group by without losing other column information
I've a df like this
source destination weight partition
0 1 2 193 7
1 1 22 2172 7
2 2 3 188 7
3 2 1 193 7
4 2 4 403 7
... ... ... ... ...
8865 3351 3352 719 4
8866 3351 2961 6009 4
8867 3352 3351 719 4
8868 3353 1540 128 2
8869 3353 1377 198 2
which is an edge list of a graph; partition is the source vertex's partition information.
I want to group by source with their destinations to find all neighbours
I tried this
group = result.groupby("source")["destination"].apply(list).reset_index(name="neighbours")
and result is:
source neighbours
0 1 [2, 22]
1 2 [3, 1, 4]
2 3 [2]
3 4 [21, 2, 5]
4 5 [4, 8]
... ... ...
3348 3349 [3350, 3345, 3324]
3349 3350 [3349]
3350 3351 [2896, 3352, 2961]
3351 3352 [3351]
3352 3353 [1540, 1377]
but here as you can see I'm losing partition information
is there a way to keep this column as well?
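A minimal sketch (not from the original post) of one way to keep it, assuming partition is constant for each source, is to aggregate both columns in a single groupby:
group = (
    result.groupby("source")
          .agg(neighbours=("destination", list), partition=("partition", "first"))
          .reset_index()
)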
|
65,262,085
|
Pandas df.loc indexing with multiple conditions and retrieving value
|
<p>I can't understand this example in the documentation for version 1.1.4</p>
<pre><code>In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
</code></pre>
<p>Then it slices it using labels</p>
<pre><code>In [50]: df1.loc[:, df1.loc['a'] > 0]
Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
</code></pre>
<p>But the value at index 'f' is less than zero.</p>
<p>And the condition is on index 'a' so why is it returning the entire first column?</p>
| 65,262,179
| 2020-12-12T05:59:11.280000
| 1
| null | 0
| 196
|
python|pandas
|
<p>Let us look at the below specific piece of code first. So when you do,</p>
<pre><code>df.loc['a']>0
</code></pre>
<p>This picks all the values of <code>index</code> 'a' which satisfy this condition. In this case, only column A's value corresponding to index 'a' is greater than 0.</p>
<p>Output:</p>
<pre><code>A True
B False
C False
D False
Name: a, dtype: bool
</code></pre>
<p>Now with this command <code>df.loc[:,df.loc['a']>0]</code>, what you are essentially doing is below,</p>
<pre><code>df.loc['a':'f',[True,False,False,False]]
</code></pre>
<p>You are asking here to pick the first column of the df for all indexes. In your df, that is just column 'A', so it prints.</p>
<pre><code> A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
</code></pre>
<p>Look at the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">pandas loc</a> for details on how <code>df.loc</code> works</p>
| 2020-12-12T06:19:10.810000
| 1
|
https://pandas.pydata.org/docs/user_guide/indexing.html
|
Indexing and selecting data#
The axis labeling information in pandas objects serves many purposes:
Identifies data (i.e. provides metadata) using known indicators,
important for analysis, visualization, and interactive console display.
Enables automatic and explicit data alignment.
Allows intuitive getting and setting of subsets of the data set.
In this section, we will focus on the final point: namely, how to slice, dice,
and generally get and set subsets of pandas objects. The primary focus will be
on Series and DataFrame as they have received more development attention in
this area.
Note
The Python and NumPy indexing operators [] and attribute operator .
provide quick and easy access to pandas data structures across a wide range
of use cases. This makes interactive work intuitive, as there’s little new
to learn if you already know how to deal with Python dictionaries and NumPy
arrays. However, since the type of the data to be accessed isn’t known in
advance, directly using standard operators has some optimization limits. For
production code, we recommended that you take advantage of the optimized
pandas data access methods exposed in this chapter.
Warning
Whether a copy or a reference is returned for a setting operation, may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies.
Different choices for indexing#
Object selection has had a number of user-requested additions in order to
support more explicit location based indexing. pandas now supports three types
of multi-axis indexing.
.loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when the items are not found. Allowed inputs are:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a
label of the index. This use is not an integer position along the
index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels
and Endpoints are inclusive.)
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Label.
.iloc is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array. .iloc will raise IndexError if a requested
indexer is out-of-bounds, except slice indexers which allow
out-of-bounds indexing. (this conforms with Python/NumPy slice
semantics). Allowed inputs are:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Position,
Advanced Indexing and Advanced
Hierarchical.
.loc, .iloc, and also [] indexing can accept a callable as indexer. See more at Selection By Callable.
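A quick sketch of the label-versus-position distinction described above, using a small hypothetical Series whose integer labels do not match their positions:
s_demo = pd.Series([10, 20, 30], index=[2, 1, 0])
s_demo.loc[0]    # 30 -- .loc is label based: the label 0 is the last element
s_demo.iloc[0]   # 10 -- .iloc is position based: position 0 is the first element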
Getting values from an object with multi-axes selection uses the following
notation (using .loc as an example, but the following applies to .iloc as
well). Any of the axes accessors may be the null slice :. Axes left out of
the specification are assumed to be :, e.g. p.loc['a'] is equivalent to
p.loc['a', :].
Object Type
Indexers
Series
s.loc[indexer]
DataFrame
df.loc[row_indexer,column_indexer]
Basics#
As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a. __getitem__
for those familiar with implementing class behavior in Python) is selecting out
lower-dimensional slices. The following table shows return type values when
indexing pandas objects with []:
Object Type
Selection
Return Value Type
Series
series[label]
scalar value
DataFrame
frame[colname]
Series corresponding to colname
Here we construct a simple time series data set to use for illustrating the
indexing functionality:
In [1]: dates = pd.date_range('1/1/2000', periods=8)
In [2]: df = pd.DataFrame(np.random.randn(8, 4),
...: index=dates, columns=['A', 'B', 'C', 'D'])
...:
In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
Note
None of the indexing functionality is time series specific unless
specifically stated.
Thus, as per above, we have the most basic indexing using []:
In [4]: s = df['A']
In [5]: s[dates[5]]
Out[5]: -0.6736897080883706
You can pass a list of columns to [] to select columns in that order.
If a column is not contained in the DataFrame, an exception will be
raised. Multiple columns can also be set in this manner:
In [6]: df
Out[6]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [7]: df[['B', 'A']] = df[['A', 'B']]
In [8]: df
Out[8]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 0.844885
You may find this useful for applying a transform (in-place) to a subset of the
columns.
Warning
pandas aligns all AXES when setting Series and DataFrame from .loc, and .iloc.
This will not modify df because the column alignment is before value assignment.
In [9]: df[['A', 'B']]
Out[9]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
In [10]: df.loc[:, ['B', 'A']] = df[['A', 'B']]
In [11]: df[['A', 'B']]
Out[11]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
The correct way to swap column values is by using raw values:
In [12]: df.loc[:, ['B', 'A']] = df[['A', 'B']].to_numpy()
In [13]: df[['A', 'B']]
Out[13]:
A B
2000-01-01 0.469112 -0.282863
2000-01-02 1.212112 -0.173215
2000-01-03 -0.861849 -2.104569
2000-01-04 0.721555 -0.706771
2000-01-05 -0.424972 0.567020
2000-01-06 -0.673690 0.113648
2000-01-07 0.404705 0.577046
2000-01-08 -0.370647 -1.157892
Attribute access#
You may access an index on a Series or column on a DataFrame directly
as an attribute:
In [14]: sa = pd.Series([1, 2, 3], index=list('abc'))
In [15]: dfa = df.copy()
In [16]: sa.b
Out[16]: 2
In [17]: dfa.A
Out[17]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
In [18]: sa.a = 5
In [19]: sa
Out[19]:
a 5
b 2
c 3
dtype: int64
In [20]: dfa.A = list(range(len(dfa.index))) # ok if A already exists
In [21]: dfa
Out[21]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
In [22]: dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
In [23]: dfa
Out[23]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
Warning
You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed.
See here for an explanation of valid identifiers.
The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed, but s['min'] is possible.
Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items.
In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
If you are using the IPython environment, you may also use tab-completion to
see these accessible attributes.
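A small illustration of the method-name conflict mentioned above (the labels here are hypothetical):
s_attr = pd.Series([3, 1, 2], index=["min", "total", "x1"])
callable(s_attr.min)   # True -- s_attr.min is the Series.min method, not the label "min"
s_attr["min"]          # 3 -- standard indexing still reaches the label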
You can also assign a dict to a row of a DataFrame:
In [24]: x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
In [25]: x.iloc[1] = {'x': 9, 'y': 99}
In [26]: x
Out[26]:
x y
0 1 3
1 9 99
2 3 5
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
if you try to use attribute access to create a new column, it creates a new attribute rather than a
new column. In 0.21.0 and later, this will raise a UserWarning:
In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})
In [2]: df.two = [4, 5, 6]
UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access
In [3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0
Slicing ranges#
The most robust and consistent way of slicing ranges along arbitrary axes is
described in the Selection by Position section
detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of
the values and the corresponding labels:
In [27]: s[:5]
Out[27]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64
In [28]: s[::2]
Out[28]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64
In [29]: s[::-1]
Out[29]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
2000-01-05 -0.424972
2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64
Note that setting works as well:
In [30]: s2 = s.copy()
In [31]: s2[:5] = 0
In [32]: s2
Out[32]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
With DataFrame, slicing inside of [] slices the rows. This is provided
largely as a convenience since it is such a common operation.
In [33]: df[:3]
Out[33]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [34]: df[::-1]
Out[34]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
Selection by label#
Warning
Whether a copy or a reference is returned for a setting operation, may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
Warning
.loc is strict when you present slicers that are not compatible (or convertible) with the index type. For example
using integers in a DatetimeIndex. These will raise a TypeError.
In [35]: dfl = pd.DataFrame(np.random.randn(5, 4),
....: columns=list('ABCD'),
....: index=pd.date_range('20130101', periods=5))
....:
In [36]: dfl
Out[36]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646
In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>
String-like values in slicing can be converted to the type of the index, leading to natural slicing.
In [37]: dfl.loc['20130102':'20130104']
Out[37]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
Warning
Changed in version 1.0.0.
pandas will raise a KeyError if indexing with a list with missing labels. See list-like Using loc with
missing keys in a list is Deprecated.
pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based protocol.
Every label asked for must be in the index, or a KeyError will be raised.
When slicing, both the start bound AND the stop bound are included, if present in the index.
Integers are valid labels, but they refer to the label and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer position along the index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels.
A boolean array.
A callable, see Selection By Callable.
In [38]: s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
In [39]: s1
Out[39]:
a 1.431256
b 1.340309
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [40]: s1.loc['c':]
Out[40]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [41]: s1.loc['b']
Out[41]: 1.3403088497993827
Note that setting works as well:
In [42]: s1.loc['c':] = 0
In [43]: s1
Out[43]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000
e 0.000000
f 0.000000
dtype: float64
With a DataFrame:
In [44]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
In [46]: df1.loc[['a', 'b', 'd'], :]
Out[46]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
d 0.974466 -2.006747 -0.410001 -0.078638
Accessing via label slices:
In [47]: df1.loc['d':, 'A':'C']
Out[47]:
A B C
d 0.974466 -2.006747 -0.410001
e 0.545952 -1.219217 -1.226825
f -1.281247 -0.727707 -0.121306
For getting a cross section using a label (equivalent to df.xs('a')):
In [48]: df1.loc['a']
Out[48]:
A 0.132003
B -0.827317
C -0.076467
D -1.187678
Name: a, dtype: float64
For getting values with a boolean array:
In [49]: df1.loc['a'] > 0
Out[49]:
A True
B False
C False
D False
Name: a, dtype: bool
In [50]: df1.loc[:, df1.loc['a'] > 0]
Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
NA values in a boolean array propagate as False:
Changed in version 1.0.2.
In [51]: mask = pd.array([True, False, True, False, pd.NA, False], dtype="boolean")
In [52]: mask
Out[52]:
<BooleanArray>
[True, False, True, False, <NA>, False]
Length: 6, dtype: boolean
In [53]: df1[mask]
Out[53]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
c 1.024180 0.569605 0.875906 -2.211372
For getting a value explicitly:
# this is also equivalent to ``df1.at['a','A']``
In [54]: df1.loc['a', 'A']
Out[54]: 0.13200317033032932
Slicing with labels#
When using .loc with slices, if both the start and the stop labels are
present in the index, then elements located between the two (including them)
are returned:
In [55]: s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])
In [56]: s.loc[3:5]
Out[56]:
3 b
2 c
5 d
dtype: object
If at least one of the two is absent, but the index is sorted, and can be
compared against start and stop labels, then slicing will still work as
expected, by selecting labels which rank between the two:
In [57]: s.sort_index()
Out[57]:
0 a
2 c
3 b
4 e
5 d
dtype: object
In [58]: s.sort_index().loc[1:6]
Out[58]:
2 c
3 b
4 e
5 d
dtype: object
However, if at least one of the two is absent and the index is not sorted, an
error will be raised (since doing otherwise would be computationally expensive,
as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, s.loc[1:6] would raise KeyError.
For the rationale behind this behavior, see
Endpoints are inclusive.
In [59]: s = pd.Series(list('abcdef'), index=[0, 3, 2, 5, 4, 2])
In [60]: s.loc[3:5]
Out[60]:
3 b
2 c
5 d
dtype: object
Also, if the index has duplicate labels and either the start or the stop label is duplicated,
an error will be raised. For instance, in the above example, s.loc[2:5] would raise a KeyError.
For more information about duplicate labels, see
Duplicate Labels.
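A brief sketch of that failure mode, reusing the Series above:
try:
    s.loc[2:5]
except KeyError as err:
    print(err)   # cannot get a slice bound for the non-unique label 2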
Selection by position#
Warning
Whether a copy or a reference is returned for a setting operation, may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
pandas provides a suite of methods in order to get purely integer based indexing. The semantics follow closely Python and NumPy slicing. These are 0-based indexing. When slicing, the start bound is included, while the upper bound is excluded. Trying to use a non-integer, even a valid label will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array.
A callable, see Selection By Callable.
In [61]: s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
In [62]: s1
Out[62]:
0 0.695775
2 0.341734
4 0.959726
6 -1.110336
8 -0.619976
dtype: float64
In [63]: s1.iloc[:3]
Out[63]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64
In [64]: s1.iloc[3]
Out[64]: -1.110336102891167
Note that setting works as well:
In [65]: s1.iloc[:3] = 0
In [66]: s1
Out[66]:
0 0.000000
2 0.000000
4 0.000000
6 -1.110336
8 -0.619976
dtype: float64
With a DataFrame:
In [67]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list(range(0, 12, 2)),
....: columns=list(range(0, 8, 2)))
....:
In [68]: df1
Out[68]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602
Select via integer slicing:
In [69]: df1.iloc[:3]
Out[69]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [70]: df1.iloc[1:5, 2:4]
Out[70]:
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427
Select via integer list:
In [71]: df1.iloc[[1, 3, 5], [1, 3]]
Out[71]:
2 6
2 -0.154951 -2.179861
6 -0.345352 0.690579
10 -1.236269 -0.487602
In [72]: df1.iloc[1:3, :]
Out[72]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [73]: df1.iloc[:, 1:3]
Out[73]:
2 4
0 -0.732339 0.687738
2 -0.154951 0.301624
4 -0.954208 1.462696
6 -0.345352 1.314232
8 2.396780 0.014871
10 -1.236269 0.896171
# this is also equivalent to ``df1.iat[1,1]``
In [74]: df1.iloc[1, 1]
Out[74]: -0.1549507744249032
For getting a cross section using an integer position (equiv to df.xs(1)):
In [75]: df1.iloc[1]
Out[75]:
0 0.403310
2 -0.154951
4 0.301624
6 -2.179861
Name: 2, dtype: float64
Out of range slice indexes are handled gracefully just as in Python/NumPy.
# these are allowed in Python/NumPy.
In [76]: x = list('abcdef')
In [77]: x
Out[77]: ['a', 'b', 'c', 'd', 'e', 'f']
In [78]: x[4:10]
Out[78]: ['e', 'f']
In [79]: x[8:10]
Out[79]: []
In [80]: s = pd.Series(x)
In [81]: s
Out[81]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object
In [82]: s.iloc[4:10]
Out[82]:
4 e
5 f
dtype: object
In [83]: s.iloc[8:10]
Out[83]: Series([], dtype: object)
Note that using slices that go out of bounds can result in
an empty axis (e.g. an empty DataFrame being returned).
In [84]: dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [85]: dfl
Out[85]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885
In [86]: dfl.iloc[:, 2:3]
Out[86]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [87]: dfl.iloc[:, 1:3]
Out[87]:
B
0 -2.182937
1 0.084844
2 1.519970
3 0.600178
4 0.132885
In [88]: dfl.iloc[4:6]
Out[88]:
A B
4 0.27423 0.132885
A single indexer that is out of bounds will raise an IndexError.
A list of indexers where any element is out of bounds will raise an
IndexError.
>>> dfl.iloc[[4, 5, 6]]
IndexError: positional indexers are out-of-bounds
>>> dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds
Selection by callable#
.loc, .iloc, and also [] indexing can accept a callable as indexer.
The callable must be a function with one argument (the calling Series or DataFrame) that returns valid output for indexing.
In [89]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [90]: df1
Out[90]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143
c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478
In [91]: df1.loc[lambda df: df['A'] > 0, :]
Out[91]:
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580
In [92]: df1.loc[:, lambda df: ['A', 'B']]
Out[92]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [93]: df1.iloc[:, lambda df: [0, 1]]
Out[93]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [94]: df1[lambda df: df.columns[0]]
Out[94]:
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64
You can use callable indexing in Series.
In [95]: df1['A'].loc[lambda s: s > 0]
Out[95]:
c 0.299368
e 1.289997
Name: A, dtype: float64
Using these methods / indexers, you can chain data selection operations
without using a temporary variable.
In [96]: bb = pd.read_csv('data/baseball.csv', index_col='id')
In [97]: (bb.groupby(['year', 'team']).sum(numeric_only=True)
....: .loc[lambda df: df['r'] > 100])
....:
Out[97]:
stint g ab r h X2b ... so ibb hbp sh sf gidp
year team ...
2007 CIN 6 379 745 101 203 35 ... 127.0 14.0 1.0 1.0 15.0 18.0
DET 5 301 1062 162 283 54 ... 176.0 3.0 10.0 4.0 8.0 28.0
HOU 4 311 926 109 218 47 ... 212.0 3.0 9.0 16.0 6.0 17.0
LAN 11 413 1021 153 293 61 ... 141.0 8.0 9.0 3.0 8.0 29.0
NYN 13 622 1854 240 509 101 ... 310.0 24.0 23.0 18.0 15.0 48.0
SFN 5 482 1305 198 337 67 ... 188.0 51.0 8.0 16.0 6.0 41.0
TEX 2 198 729 115 200 40 ... 140.0 4.0 5.0 2.0 8.0 16.0
TOR 4 459 1408 187 378 96 ... 265.0 16.0 12.0 4.0 16.0 38.0
[8 rows x 18 columns]
Combining positional and label-based indexing#
If you wish to get the 0th and the 2nd elements from the index in the ‘A’ column, you can do:
In [98]: dfd = pd.DataFrame({'A': [1, 2, 3],
....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:
In [99]: dfd
Out[99]:
A B
a 1 4
b 2 5
c 3 6
In [100]: dfd.loc[dfd.index[[0, 2]], 'A']
Out[100]:
a 1
c 3
Name: A, dtype: int64
This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using
positional indexing to select things.
In [101]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]
Out[101]:
a 1
c 3
Name: A, dtype: int64
For getting multiple indexers, using .get_indexer:
In [102]: dfd.iloc[[0, 2], dfd.columns.get_indexer(['A', 'B'])]
Out[102]:
A B
a 1 4
c 3 6
Indexing with list with missing labels is deprecated#
Warning
Changed in version 1.0.0.
Using .loc or [] with a list with one or more missing labels will no longer reindex, in favor of .reindex.
In prior versions, using .loc[list-of-labels] would work as long as at least 1 of the keys was found (otherwise it
would raise a KeyError). This behavior was changed and will now raise a KeyError if at least one label is missing.
The recommended alternative is to use .reindex().
For example.
In [103]: s = pd.Series([1, 2, 3])
In [104]: s
Out[104]:
0 1
1 2
2 3
dtype: int64
Selection with all keys found is unchanged.
In [105]: s.loc[[1, 2]]
Out[105]:
1 2
2 3
dtype: int64
Previous behavior
In [4]: s.loc[[1, 2, 3]]
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Current behavior
In [4]: s.loc[[1, 2, 3]]
Passing list-likes to .loc with any non-matching elements will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Reindexing#
The idiomatic way to achieve selecting potentially not-found elements is via .reindex(). See also the section on reindexing.
In [106]: s.reindex([1, 2, 3])
Out[106]:
1 2.0
2 3.0
3 NaN
dtype: float64
Alternatively, if you want to select only valid keys, the following is idiomatic and efficient; it is guaranteed to preserve the dtype of the selection.
In [107]: labels = [1, 2, 3]
In [108]: s.loc[s.index.intersection(labels)]
Out[108]:
1 2
2 3
dtype: int64
Having a duplicated index will raise for a .reindex():
In [109]: s = pd.Series(np.arange(4), index=['a', 'a', 'b', 'c'])
In [110]: labels = ['c', 'd']
In [17]: s.reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Generally, you can intersect the desired labels with the current
axis, and then reindex.
In [111]: s.loc[s.index.intersection(labels)].reindex(labels)
Out[111]:
c 3.0
d NaN
dtype: float64
However, this would still raise if your resulting index is duplicated.
In [41]: labels = ['a', 'd']
In [42]: s.loc[s.index.intersection(labels)].reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Selecting random samples#
A random selection of rows or columns from a Series or DataFrame with the sample() method. The method will sample rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
In [112]: s = pd.Series([0, 1, 2, 3, 4, 5])
# When no arguments are passed, returns 1 row.
In [113]: s.sample()
Out[113]:
4 4
dtype: int64
# One may specify either a number of rows:
In [114]: s.sample(n=3)
Out[114]:
0 0
4 4
1 1
dtype: int64
# Or a fraction of the rows:
In [115]: s.sample(frac=0.5)
Out[115]:
5 5
3 3
1 1
dtype: int64
By default, sample will return each row at most once, but one can also sample with replacement
using the replace option:
In [116]: s = pd.Series([0, 1, 2, 3, 4, 5])
# Without replacement (default):
In [117]: s.sample(n=6, replace=False)
Out[117]:
0 0
1 1
5 5
3 3
2 2
4 4
dtype: int64
# With replacement:
In [118]: s.sample(n=6, replace=True)
Out[118]:
0 0
4 4
3 3
2 2
4 4
4 4
dtype: int64
By default, each row has an equal probability of being selected, but if you want rows
to have different probabilities, you can pass the sample function sampling weights as
weights. These weights can be a list, a NumPy array, or a Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights by the sum of the weights. For example:
In [119]: s = pd.Series([0, 1, 2, 3, 4, 5])
In [120]: example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]
In [121]: s.sample(n=3, weights=example_weights)
Out[121]:
5 5
4 4
3 3
dtype: int64
# Weights will be re-normalized automatically
In [122]: example_weights2 = [0.5, 0, 0, 0, 0, 0]
In [123]: s.sample(n=1, weights=example_weights2)
Out[123]:
0 0
dtype: int64
When applied to a DataFrame, you can use a column of the DataFrame as sampling weights
(provided you are sampling rows and not columns) by simply passing the name of the column
as a string.
In [124]: df2 = pd.DataFrame({'col1': [9, 8, 7, 6],
.....: 'weight_column': [0.5, 0.4, 0.1, 0]})
.....:
In [125]: df2.sample(n=3, weights='weight_column')
Out[125]:
col1 weight_column
1 8 0.4
0 9 0.5
2 7 0.1
sample also allows users to sample columns instead of rows using the axis argument.
In [126]: df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
In [127]: df3.sample(n=1, axis=1)
Out[127]:
col1
0 1
1 2
2 3
Finally, one can also set a seed for sample’s random number generator using the random_state argument, which will accept either an integer (as a seed) or a NumPy RandomState object.
In [128]: df4 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
# With a given seed, the sample will always draw the same rows.
In [129]: df4.sample(n=2, random_state=2)
Out[129]:
col1 col2
2 3 4
1 2 3
In [130]: df4.sample(n=2, random_state=2)
Out[130]:
col1 col2
2 3 4
1 2 3
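A short sketch showing the other accepted form, passing a NumPy RandomState object instead of an integer seed:
rs = np.random.RandomState(2)
df4.sample(n=2, random_state=rs)   # reproducible draws driven by the RandomState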
Setting with enlargement#
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis.
In the Series case this is effectively an appending operation.
In [131]: se = pd.Series([1, 2, 3])
In [132]: se
Out[132]:
0 1
1 2
2 3
dtype: int64
In [133]: se[5] = 5.
In [134]: se
Out[134]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
A DataFrame can be enlarged on either axis via .loc.
In [135]: dfi = pd.DataFrame(np.arange(6).reshape(3, 2),
.....: columns=['A', 'B'])
.....:
In [136]: dfi
Out[136]:
A B
0 0 1
1 2 3
2 4 5
In [137]: dfi.loc[:, 'C'] = dfi.loc[:, 'A']
In [138]: dfi
Out[138]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
This is like an append operation on the DataFrame.
In [139]: dfi.loc[3] = 5
In [140]: dfi
Out[140]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
Fast scalar value getting and setting#
Since indexing with [] must handle a lot of cases (single-label access,
slicing, boolean indexing, etc.), it has a bit of overhead in order to figure
out what you’re asking for. If you only want to access a scalar value, the
fastest way is to use the at and iat methods, which are implemented on
all of the data structures.
Similarly to loc, at provides label-based scalar lookups, while iat provides integer-based lookups analogously to iloc.
In [141]: s.iat[5]
Out[141]: 5
In [142]: df.at[dates[5], 'A']
Out[142]: -0.6736897080883706
In [143]: df.iat[3, 0]
Out[143]: 0.7215551622443669
You can also set using these same indexers.
In [144]: df.at[dates[5], 'E'] = 7
In [145]: df.iat[3, 0] = 7
at may enlarge the object in-place as above if the indexer is missing.
In [146]: df.at[dates[-1] + pd.Timedelta('1 day'), 0] = 7
In [147]: df
Out[147]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-05 -0.424972 0.567020 0.276232 -1.087401 NaN NaN
2000-01-06 -0.673690 0.113648 -1.478427 0.524988 7.0 NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885 NaN NaN
2000-01-09 NaN NaN NaN NaN NaN 7.0
Boolean indexing#
Another common operation is the use of boolean vectors to filter the data.
The operators are: | for or, & for and, and ~ for not.
These must be grouped by using parentheses, since by default Python will
evaluate an expression such as df['A'] > 2 & df['B'] < 3 as
df['A'] > (2 & df['B']) < 3, while the desired evaluation order is
(df['A'] > 2) & (df['B'] < 3).
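A minimal sketch (with a hypothetical frame) of why the parentheses matter:
df_demo = pd.DataFrame({'A': [1, 3, 5], 'B': [2, 2, 4]})
df_demo[(df_demo['A'] > 2) & (df_demo['B'] < 3)]   # parentheses force the intended order
# df_demo[df_demo['A'] > 2 & df_demo['B'] < 3]     # parsed as A > (2 & B) < 3 and raises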
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
In [148]: s = pd.Series(range(-3, 4))
In [149]: s
Out[149]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64
In [150]: s[s > 0]
Out[150]:
4 1
5 2
6 3
dtype: int64
In [151]: s[(s < -1) | (s > 0.5)]
Out[151]:
0 -3
1 -2
4 1
5 2
6 3
dtype: int64
In [152]: s[~(s < 0)]
Out[152]:
3 0
4 1
5 2
6 3
dtype: int64
You may select rows from a DataFrame using a boolean vector the same length as
the DataFrame’s index (for example, something derived from one of the columns
of the DataFrame):
In [153]: df[df['A'] > 0]
Out[153]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
List comprehensions and the map method of Series can also be used to produce
more complex criteria:
In [154]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
.....: 'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
# only want 'two' or 'three'
In [155]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [156]: df2[criterion]
Out[156]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# equivalent but slower
In [157]: df2[[x.startswith('t') for x in df2['a']]]
Out[157]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# Multiple criteria
In [158]: df2[criterion & (df2['b'] == 'x')]
Out[158]:
a b c
3 three x 0.361719
With the choice methods Selection by Label, Selection by Position,
and Advanced Indexing you may select along more than one axis using boolean vectors combined with other indexing expressions.
In [159]: df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
Out[159]:
b c
3 x 0.361719
Warning
iloc supports two kinds of boolean indexing. If the indexer is a boolean Series,
an error will be raised. For instance, in the following example, df.iloc[s.values, 1] is ok
because the boolean indexer is an array, but df.iloc[s, 1] would raise a ValueError.
In [160]: df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
.....: index=list('abc'),
.....: columns=['A', 'B'])
.....:
In [161]: s = (df['A'] > 2)
In [162]: s
Out[162]:
a False
b True
c True
Name: A, dtype: bool
In [163]: df.loc[s, 'B']
Out[163]:
b 4
c 6
Name: B, dtype: int64
In [164]: df.iloc[s.values, 1]
Out[164]:
b 4
c 6
Name: B, dtype: int64
Indexing with isin#
Consider the isin() method of Series, which returns a boolean
vector that is true wherever the Series elements exist in the passed list.
This allows you to select rows where one or more columns have values you want:
In [165]: s = pd.Series(np.arange(5), index=np.arange(5)[::-1], dtype='int64')
In [166]: s
Out[166]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [167]: s.isin([2, 4, 6])
Out[167]:
4 False
3 False
2 True
1 False
0 True
dtype: bool
In [168]: s[s.isin([2, 4, 6])]
Out[168]:
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for the cases
when you don’t know which of the sought labels are in fact present:
In [169]: s[s.index.isin([2, 4, 6])]
Out[169]:
4 0
2 2
dtype: int64
# compare it to the following
In [170]: s.reindex([2, 4, 6])
Out[170]:
2 2.0
4 0.0
6 NaN
dtype: float64
In addition to that, MultiIndex allows selecting a separate level to use
in the membership check:
In [171]: s_mi = pd.Series(np.arange(6),
.....: index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
.....:
In [172]: s_mi
Out[172]:
0 a 0
b 1
c 2
1 a 3
b 4
c 5
dtype: int64
In [173]: s_mi.iloc[s_mi.index.isin([(1, 'a'), (2, 'b'), (0, 'c')])]
Out[173]:
0 c 2
1 a 3
dtype: int64
In [174]: s_mi.iloc[s_mi.index.isin(['a', 'c', 'e'], level=1)]
Out[174]:
0 a 0
c 2
1 a 3
c 5
dtype: int64
DataFrame also has an isin() method. When calling isin, pass a set of
values as either an array or dict. If values is an array, isin returns
a DataFrame of booleans that is the same shape as the original DataFrame, with True
wherever the element is in the sequence of values.
In [175]: df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
.....: 'ids2': ['a', 'n', 'c', 'n']})
.....:
In [176]: values = ['a', 'b', 1, 3]
In [177]: df.isin(values)
Out[177]:
vals ids ids2
0 True True True
1 False True False
2 True False False
3 False False False
Oftentimes you’ll want to match certain values with certain columns.
Just make values a dict where the key is the column, and the value is
a list of items you want to check for.
In [178]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [179]: df.isin(values)
Out[179]:
vals ids ids2
0 True True False
1 False True False
2 True False False
3 False False False
To return the DataFrame of booleans where the values are not in the original DataFrame,
use the ~ operator:
In [180]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [181]: ~df.isin(values)
Out[181]:
vals ids ids2
0 False False True
1 True False True
2 False True True
3 True True True
Combine DataFrame’s isin with the any() and all() methods to
quickly select subsets of your data that meet a given criteria.
To select a row where each column meets its own criterion:
In [182]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
In [183]: row_mask = df.isin(values).all(1)
In [184]: df[row_mask]
Out[184]:
vals ids ids2
0 1 a a
The where() Method and Masking#
Selecting values from a Series with a boolean vector generally returns a
subset of the data. To guarantee that selection output has the same shape as
the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
In [185]: s[s > 0]
Out[185]:
3 1
2 2
1 3
0 4
dtype: int64
To return a Series of the same shape as the original:
In [186]: s.where(s > 0)
Out[186]:
4 NaN
3 1.0
2 2.0
1 3.0
0 4.0
dtype: float64
Selecting values from a DataFrame with a boolean criterion now also preserves
input data shape. where is used under the hood as the implementation.
The code below is equivalent to df.where(df < 0).
In [187]: df[df < 0]
Out[187]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
In addition, where takes an optional other argument for replacement of
values where the condition is False, in the returned copy.
In [188]: df.where(df < 0, -df)
Out[188]:
A B C D
2000-01-01 -2.104139 -1.309525 -0.485855 -0.245166
2000-01-02 -0.352480 -0.390389 -1.192319 -1.655824
2000-01-03 -0.864883 -0.299674 -0.227870 -0.281059
2000-01-04 -0.846958 -1.222082 -0.600705 -1.233203
2000-01-05 -0.669692 -0.605656 -1.169184 -0.342416
2000-01-06 -0.868584 -0.948458 -2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 -0.168904 -0.048048
2000-01-08 -0.801196 -1.392071 -0.048788 -0.808838
You may wish to set values based on some boolean criteria.
This can be done intuitively like so:
In [189]: s2 = s.copy()
In [190]: s2[s2 < 0] = 0
In [191]: s2
Out[191]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [192]: df2 = df.copy()
In [193]: df2[df2 < 0] = 0
In [194]: df2
Out[194]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000
By default, where returns a modified copy of the data. There is an
optional parameter inplace so that the original data can be modified
without creating a copy:
In [195]: df_orig = df.copy()
In [196]: df_orig.where(df > 0, -df, inplace=True)
In [197]: df_orig
Out[197]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
2000-01-07 2.670153 0.114722 0.168904 0.048048
2000-01-08 0.801196 1.392071 0.048788 0.808838
Note
The signature for DataFrame.where() differs from numpy.where().
Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
In [198]: df.where(df < 0, -df) == np.where(df < 0, df, -df)
Out[198]:
A B C D
2000-01-01 True True True True
2000-01-02 True True True True
2000-01-03 True True True True
2000-01-04 True True True True
2000-01-05 True True True True
2000-01-06 True True True True
2000-01-07 True True True True
2000-01-08 True True True True
Alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame),
such that partial selection with setting is possible. This is analogous to
partial setting via .loc (but on the contents rather than the axis labels).
In [199]: df2 = df.copy()
In [200]: df2[df2[1:4] > 0] = 3
In [201]: df2
Out[201]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838
Where can also accept axis and level parameters to align the input when
performing the where.
In [202]: df2 = df.copy()
In [203]: df2.where(df2 > 0, df2['A'], axis='index')
Out[203]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
This is equivalent to (but faster than) the following.
In [204]: df2 = df.copy()
In [205]: df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])
Out[205]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
where can accept a callable as condition and other arguments. The function must
take one argument (the calling Series or DataFrame) and return valid output
to be used as the condition and other argument.
In [206]: df3 = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [4, 5, 6],
.....: 'C': [7, 8, 9]})
.....:
In [207]: df3.where(lambda x: x > 4, lambda x: x + 10)
Out[207]:
A B C
0 11 14 7
1 12 5 8
2 13 6 9
Mask#
mask() is the inverse boolean operation of where.
In [208]: s.mask(s >= 0)
Out[208]:
4 NaN
3 NaN
2 NaN
1 NaN
0 NaN
dtype: float64
In [209]: df.mask(df >= 0)
Out[209]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
Setting with enlargement conditionally using numpy()#
An alternative to where() is to use numpy.where().
Combined with setting a new column, you can use it to enlarge a DataFrame where the
values are determined conditionally.
Suppose you have two choices to choose from in the following DataFrame, and you want to
set a new column color to ‘green’ when the second column has ‘Z’. You can do the
following:
In [210]: df = pd.DataFrame({'col1': list('ABBC'), 'col2': list('ZZXY')})
In [211]: df['color'] = np.where(df['col2'] == 'Z', 'green', 'red')
In [212]: df
Out[212]:
col1 col2 color
0 A Z green
1 B Z green
2 B X red
3 C Y red
If you have multiple conditions, you can use numpy.select() to achieve that. Say that,
corresponding to three conditions, there are three choices of color, with a fourth color
as a fallback; you can do the following.
In [213]: conditions = [
.....: (df['col2'] == 'Z') & (df['col1'] == 'A'),
.....: (df['col2'] == 'Z') & (df['col1'] == 'B'),
.....: (df['col1'] == 'B')
.....: ]
.....:
In [214]: choices = ['yellow', 'blue', 'purple']
In [215]: df['color'] = np.select(conditions, choices, default='black')
In [216]: df
Out[216]:
col1 col2 color
0 A Z yellow
1 B Z blue
2 B X purple
3 C Y black
The query() Method#
DataFrame objects have a query()
method that allows selection using an expression.
You can get the value of the frame where column b has values
between the values of columns a and c. For example:
In [217]: n = 10
In [218]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [219]: df
Out[219]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437
# pure python
In [220]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[220]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
# query
In [221]: df.query('(a < b) & (b < c)')
Out[221]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
Do the same thing but fall back on a named index if there is no column
with the name a.
In [222]: df = pd.DataFrame(np.random.randint(n / 2, size=(n, 2)), columns=list('bc'))
In [223]: df.index.name = 'a'
In [224]: df
Out[224]:
b c
a
0 0 4
1 0 1
2 3 4
3 4 3
4 1 4
5 0 3
6 0 1
7 3 4
8 2 3
9 1 1
In [225]: df.query('a < b and b < c')
Out[225]:
b c
a
2 3 4
If instead you don’t want to or cannot name your index, you can use the name
index in your query expression:
In [226]: df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))
In [227]: df
Out[227]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
In [228]: df.query('index < b < c')
Out[228]:
b c
2 5 6
Note
If the name of your index overlaps with a column name, the column name is
given precedence. For example,
In [229]: df = pd.DataFrame({'a': np.random.randint(5, size=5)})
In [230]: df.index.name = 'a'
In [231]: df.query('a > 2') # uses the column 'a', not the index
Out[231]:
a
a
1 3
3 3
You can still use the index in a query expression by using the special
identifier ‘index’:
In [232]: df.query('index > 2')
Out[232]:
a
a
3 3
4 2
If for some reason you have a column named index, then you can refer to
the index as ilevel_0 as well, but at this point you should consider
renaming your columns to something less ambiguous.
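A small sketch of that situation, with a hypothetical frame whose column is literally named index:
df_idx = pd.DataFrame({'index': [10, 20, 30]})
df_idx.query('ilevel_0 < 2')   # ilevel_0 still refers to index level 0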
MultiIndex query() Syntax#
You can also use the levels of a DataFrame with a
MultiIndex as if they were columns in the frame:
In [233]: n = 10
In [234]: colors = np.random.choice(['red', 'green'], size=n)
In [235]: foods = np.random.choice(['eggs', 'ham'], size=n)
In [236]: colors
Out[236]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'], dtype='<U5')
In [237]: foods
Out[237]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs',
'eggs'], dtype='<U4')
In [238]: index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
In [239]: df = pd.DataFrame(np.random.randn(n, 2), index=index)
In [240]: df
Out[240]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [241]: df.query('color == "red"')
Out[241]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
If the levels of the MultiIndex are unnamed, you can refer to them using
special names:
In [242]: df.index.names = [None, None]
In [243]: df
Out[243]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [244]: df.query('ilevel_0 == "red"')
Out[244]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
The convention is ilevel_0, which means “index level 0” for the 0th level
of the index.
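For example, ilevel_1 refers to the second (unnamed) level; a minimal sketch, assuming the unnamed-level frame built just above is still in scope:
>>> df.query('ilevel_1 == "eggs"')  # rows whose unnamed second level (the former "food" level) is "eggs"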
query() Use Cases#
A use case for query() is when you have a collection of
DataFrame objects that have a subset of column names (or index
levels/names) in common. You can pass the same query to both frames without
having to specify which frame you’re interested in querying.
In [245]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [246]: df
Out[246]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611
In [247]: df2 = pd.DataFrame(np.random.rand(n + 2, 3), columns=df.columns)
In [248]: df2
Out[248]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
8 0.420223 0.136404 0.141295
9 0.732206 0.419540 0.604675
10 0.604466 0.848974 0.896165
11 0.589168 0.920046 0.732716
In [249]: expr = '0.0 <= a <= c <= 0.5'
In [250]: map(lambda frame: frame.query(expr), [df, df2])
Out[250]: <map at 0x7f1ea0d8e580>
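Note that map is lazy in Python 3, so the output above is only an unevaluated map object. To actually see the filtered frames, materialize it, for example with list() (a small sketch reusing expr, df and df2 from above):
>>> list(map(lambda frame: frame.query(expr), [df, df2]))  # materialize to get the two filtered frames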
query() Python versus pandas Syntax Comparison#
Full numpy-like syntax:
In [251]: df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
In [252]: df
Out[252]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0
In [253]: df.query('(a < b) & (b < c)')
Out[253]:
a b c
0 7 8 9
In [254]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[254]:
a b c
0 7 8 9
Slightly nicer by removing the parentheses (comparison operators bind tighter
than & and |):
In [255]: df.query('a < b & b < c')
Out[255]:
a b c
0 7 8 9
Use English instead of symbols:
In [256]: df.query('a < b and b < c')
Out[256]:
a b c
0 7 8 9
Pretty close to how you might write it on paper:
In [257]: df.query('a < b < c')
Out[257]:
a b c
0 7 8 9
The in and not in operators#
query() also supports special use of Python’s in and
not in comparison operators, providing a succinct syntax for calling the
isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [258]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:
In [259]: df
Out[259]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [260]: df.query('a in b')
Out[260]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
# How you'd do it in pure Python
In [261]: df[df['a'].isin(df['b'])]
Out[261]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
In [262]: df.query('a not in b')
Out[262]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [263]: df[~df['a'].isin(df['b'])]
Out[263]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
You can combine this with other expressions for very succinct queries:
# rows where cols a and b have overlapping values
# and col c's values are less than col d's
In [264]: df.query('a in b and c < d')
Out[264]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
# pure Python
In [265]: df[df['b'].isin(df['a']) & (df['c'] < df['d'])]
Out[265]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
10 f c 0 6
11 f c 1 2
Note
Note that in and not in are evaluated in Python, since numexpr
has no equivalent of this operation. However, only the in/not in
expression itself is evaluated in vanilla Python. For example, in the
expression
df.query('a in b + c + d')
(b + c + d) is evaluated by numexpr and then the in
operation is evaluated in plain Python. In general, any operations that can
be evaluated using numexpr will be.
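If you prefer to evaluate an expression entirely in Python, query() also accepts an engine argument (a small sketch reusing the frame above):
>>> df.query('a in b', engine='python')  # bypass numexpr and evaluate the whole expression in Python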
Special use of the == operator with list objects#
Comparing a list of values to a column using ==/!= works similarly
to in/not in.
In [266]: df.query('b == ["a", "b", "c"]')
Out[266]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [267]: df[df['b'].isin(["a", "b", "c"])]
Out[267]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [268]: df.query('c == [1, 2]')
Out[268]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [269]: df.query('c != [1, 2]')
Out[269]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# using in/not in
In [270]: df.query('[1, 2] in c')
Out[270]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [271]: df.query('[1, 2] not in c')
Out[271]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# pure Python
In [272]: df[df['c'].isin([1, 2])]
Out[272]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
Boolean operators#
You can negate boolean expressions with the word not or the ~ operator.
In [273]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [274]: df['bools'] = np.random.rand(len(df)) > 0.5
In [275]: df.query('~bools')
Out[275]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [276]: df.query('not bools')
Out[276]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [277]: df.query('not bools') == df[~df['bools']]
Out[277]:
a b c bools
2 True True True True
7 True True True True
8 True True True True
Of course, expressions can be arbitrarily complex too:
# short query syntax
In [278]: shorter = df.query('a < b < c and (not bools) or bools > 2')
# equivalent in pure Python
In [279]: longer = df[(df['a'] < df['b'])
.....: & (df['b'] < df['c'])
.....: & (~df['bools'])
.....: | (df['bools'] > 2)]
.....:
In [280]: shorter
Out[280]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [281]: longer
Out[281]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [282]: shorter == longer
Out[282]:
a b c bools
7 True True True True
Performance of query()#
DataFrame.query() using numexpr is slightly faster than Python for
large frames.
Note
You will only see the performance benefits of using the numexpr engine
with DataFrame.query() if your frame has more than approximately 200,000
rows.
The benchmark plot shown in the online documentation was created using a DataFrame with 3 columns,
each containing floating point values generated using numpy.random.randn().
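If you want to check this on your own data, a rough timing sketch (the frame size and column names here are arbitrary) could look like the following:
import numpy as np
import pandas as pd
from timeit import timeit

# a frame large enough for numexpr to pay off (~1 million rows)
big = pd.DataFrame(np.random.randn(1_000_000, 3), columns=list("abc"))

# plain boolean-mask indexing
t_mask = timeit(lambda: big[(big["a"] < big["b"]) & (big["b"] < big["c"])], number=10)

# query(), which uses the numexpr engine automatically when numexpr is installed
t_query = timeit(lambda: big.query("(a < b) & (b < c)"), number=10)

print(f"boolean mask: {t_mask:.3f}s   query(): {t_query:.3f}s")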
Duplicate data#
If you want to identify and remove duplicate rows in a DataFrame, there are
two methods that will help: duplicated and drop_duplicates. Each
takes as an argument the columns to use to identify duplicated rows.
duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated.
drop_duplicates removes duplicate rows.
By default, the first observed row of a duplicate set is considered unique, but
each method has a keep parameter to specify targets to be kept.
keep='first' (default): mark / drop duplicates except for the first occurrence.
keep='last': mark / drop duplicates except for the last occurrence.
keep=False: mark / drop all duplicates.
In [283]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
.....: 'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
In [284]: df2
Out[284]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [285]: df2.duplicated('a')
Out[285]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool
In [286]: df2.duplicated('a', keep='last')
Out[286]:
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
In [287]: df2.duplicated('a', keep=False)
Out[287]:
0 True
1 True
2 True
3 True
4 True
5 False
6 False
dtype: bool
In [288]: df2.drop_duplicates('a')
Out[288]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329
In [289]: df2.drop_duplicates('a', keep='last')
Out[289]:
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [290]: df2.drop_duplicates('a', keep=False)
Out[290]:
a b c
5 three x -1.964475
6 four x 1.298329
Also, you can pass a list of columns to identify duplications.
In [291]: df2.duplicated(['a', 'b'])
Out[291]:
0 False
1 False
2 False
3 False
4 True
5 False
6 False
dtype: bool
In [292]: df2.drop_duplicates(['a', 'b'])
Out[292]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated then perform slicing.
The same set of options are available for the keep parameter.
In [293]: df3 = pd.DataFrame({'a': np.arange(6),
.....: 'b': np.random.randn(6)},
.....: index=['a', 'a', 'b', 'c', 'b', 'a'])
.....:
In [294]: df3
Out[294]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [295]: df3.index.duplicated()
Out[295]: array([False, True, False, False, True, True])
In [296]: df3[~df3.index.duplicated()]
Out[296]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409
In [297]: df3[~df3.index.duplicated(keep='last')]
Out[297]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [298]: df3[~df3.index.duplicated(keep=False)]
Out[298]:
a b
c 3 -0.894409
Dictionary-like get() method#
Each of Series or DataFrame have a get method which can return a
default value.
In [299]: s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
In [300]: s.get('a') # equivalent to s['a']
Out[300]: 1
In [301]: s.get('x', default=-1)
Out[301]: -1
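The same pattern works on a DataFrame, where get looks up a column label; a small sketch with made-up column names:
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df.get('a')               # equivalent to df['a']
>>> df.get('x', default=-1)   # returns -1 instead of raising a KeyError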
Looking up values by index/column labels#
Sometimes you want to extract a set of values given a sequence of row labels
and column labels, this can be achieved by pandas.factorize and NumPy indexing.
For instance:
In [302]: df = pd.DataFrame({'col': ["A", "A", "B", "B"],
.....: 'A': [80, 23, np.nan, 22],
.....: 'B': [80, 55, 76, 67]})
.....:
In [303]: df
Out[303]:
col A B
0 A 80.0 80
1 A 23.0 55
2 B NaN 76
3 B 22.0 67
In [304]: idx, cols = pd.factorize(df['col'])
In [305]: df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx]
Out[305]: array([80., 23., 76., 67.])
Formerly this could be achieved with the dedicated DataFrame.lookup method
which was deprecated in version 1.2.0.
Index objects#
The pandas Index class and its subclasses can be viewed as
implementing an ordered multiset. Duplicates are allowed. However, if you try
to convert an Index object with duplicate entries into a
set, an exception will be raised.
Index also provides the infrastructure necessary for
lookups, data alignment, and reindexing. The easiest way to create an
Index directly is to pass a list or other sequence to
Index:
In [306]: index = pd.Index(['e', 'd', 'a', 'b'])
In [307]: index
Out[307]: Index(['e', 'd', 'a', 'b'], dtype='object')
In [308]: 'd' in index
Out[308]: True
You can also pass a name to be stored in the index:
In [309]: index = pd.Index(['e', 'd', 'a', 'b'], name='something')
In [310]: index.name
Out[310]: 'something'
The name, if set, will be shown in the console display:
In [311]: index = pd.Index(list(range(5)), name='rows')
In [312]: columns = pd.Index(['A', 'B', 'C'], name='cols')
In [313]: df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
In [314]: df
Out[314]:
cols A B C
rows
0 1.295989 -1.051694 1.340429
1 -2.366110 0.428241 0.387275
2 0.433306 0.929548 0.278094
3 2.154730 -0.315628 0.264223
4 1.126818 1.132290 -0.353310
In [315]: df['A']
Out[315]:
rows
0 1.295989
1 -2.366110
2 0.433306
3 2.154730
4 1.126818
Name: A, dtype: float64
Setting metadata#
Indexes are “mostly immutable”, but it is possible to set and change their
name attribute. You can use the rename and set_names methods to set these attributes
directly, and they default to returning a copy.
See Advanced Indexing for usage of MultiIndexes.
In [316]: ind = pd.Index([1, 2, 3])
In [317]: ind.rename("apple")
Out[317]: Int64Index([1, 2, 3], dtype='int64', name='apple')
In [318]: ind
Out[318]: Int64Index([1, 2, 3], dtype='int64')
In [319]: ind.set_names(["apple"], inplace=True)
In [320]: ind.name = "bob"
In [321]: ind
Out[321]: Int64Index([1, 2, 3], dtype='int64', name='bob')
set_names, set_levels, and set_codes also take an optional
level argument
In [322]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']], names=['first', 'second'])
In [323]: index
Out[323]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])
In [324]: index.levels[1]
Out[324]: Index(['one', 'two'], dtype='object', name='second')
In [325]: index.set_levels(["a", "b"], level=1)
Out[325]:
MultiIndex([(0, 'a'),
(0, 'b'),
(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['first', 'second'])
Set operations on Index objects#
The two main operations are union and intersection.
Difference is provided via the .difference() method.
In [326]: a = pd.Index(['c', 'b', 'a'])
In [327]: b = pd.Index(['c', 'e', 'd'])
In [328]: a.difference(b)
Out[328]: Index(['a', 'b'], dtype='object')
Also available is the symmetric_difference operation, which returns elements
that appear in either idx1 or idx2, but not in both. This is
equivalent to the Index created by idx1.difference(idx2).union(idx2.difference(idx1)),
with duplicates dropped.
In [329]: idx1 = pd.Index([1, 2, 3, 4])
In [330]: idx2 = pd.Index([2, 3, 4, 5])
In [331]: idx1.symmetric_difference(idx2)
Out[331]: Int64Index([1, 5], dtype='int64')
Note
The resulting index from a set operation will be sorted in ascending order.
When performing Index.union() between indexes with different dtypes, the indexes
must be cast to a common dtype. Typically, though not always, this is object dtype. The
exception is when performing a union between integer and float data. In this case, the
integer values are converted to float
In [332]: idx1 = pd.Index([0, 1, 2])
In [333]: idx2 = pd.Index([0.5, 1.5])
In [334]: idx1.union(idx2)
Out[334]: Float64Index([0.0, 0.5, 1.0, 1.5, 2.0], dtype='float64')
Missing values#
Important
Even though Index can hold missing values (NaN), it should be avoided
if you do not want any unexpected results. For example, some operations
exclude missing values implicitly.
Index.fillna fills missing values with specified scalar value.
In [335]: idx1 = pd.Index([1, np.nan, 3, 4])
In [336]: idx1
Out[336]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')
In [337]: idx1.fillna(2)
Out[337]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')
In [338]: idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'),
.....: pd.NaT,
.....: pd.Timestamp('2011-01-03')])
.....:
In [339]: idx2
Out[339]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]', freq=None)
In [340]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[340]: DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq=None)
Set / reset index#
Occasionally you will load or create a data set into a DataFrame and want to
add an index after you’ve already done so. There are a couple of different
ways.
Set an index#
DataFrame has a set_index() method which takes a column name
(for a regular Index) or a list of column names (for a MultiIndex).
To create a new, re-indexed DataFrame:
In [341]: data
Out[341]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
In [342]: indexed1 = data.set_index('c')
In [343]: indexed1
Out[343]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [344]: indexed2 = data.set_index(['a', 'b'])
In [345]: indexed2
Out[345]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
The append keyword option allows you to keep the existing index and append
the given columns to a MultiIndex:
In [346]: frame = data.set_index('c', drop=False)
In [347]: frame = frame.set_index(['a', 'b'], append=True)
In [348]: frame
Out[348]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
Other options in set_index allow you to not drop the index columns or to add
the index in-place (without creating a new object):
In [349]: data.set_index('c', drop=False)
Out[349]:
a b c d
c
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [350]: data.set_index(['a', 'b'], inplace=True)
In [351]: data
Out[351]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
Reset the index#
As a convenience, there is a new function on DataFrame called
reset_index() which transfers the index values into the
DataFrame’s columns and sets a simple integer index.
This is the inverse operation of set_index().
In [352]: data
Out[352]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
In [353]: data.reset_index()
Out[353]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
The output is more similar to a SQL table or a record array. The names for the
columns derived from the index are the ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:
In [354]: frame
Out[354]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [355]: frame.reset_index(level=1)
Out[355]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0
reset_index takes an optional parameter drop which, if True, simply
discards the index instead of putting the index values into the DataFrame’s columns.
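A minimal sketch of drop=True, reusing data from above:
>>> data.reset_index(drop=True)  # the 'a'/'b' index levels are discarded rather than turned into columns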
Adding an ad hoc index#
If you create an index yourself, you can just assign it to the index field:
data.index = index
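For example, a small self-contained sketch (the labels are arbitrary; the new index must have the same length as the frame):
>>> df = pd.DataFrame({'x': [1, 2, 3]})
>>> df.index = pd.Index(['r1', 'r2', 'r3'], name='rows')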
Returning a view versus a copy#
When setting values in a pandas object, care must be taken to avoid what is called
chained indexing. Here is an example.
In [356]: dfmi = pd.DataFrame([list('abcd'),
.....: list('efgh'),
.....: list('ijkl'),
.....: list('mnop')],
.....: columns=pd.MultiIndex.from_product([['one', 'two'],
.....: ['first', 'second']]))
.....:
In [357]: dfmi
Out[357]:
one two
first second first second
0 a b c d
1 e f g h
2 i j k l
3 m n o p
Compare these two access methods:
In [358]: dfmi['one']['second']
Out[358]:
0 b
1 f
2 j
3 n
Name: second, dtype: object
In [359]: dfmi.loc[:, ('one', 'second')]
Out[359]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object
These both yield the same results, so which should you use? It is instructive to understand the order
of operations on these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed.
Then a second Python operation selects the series indexed by 'second' from that intermediate result;
think of it as dfmi_with_one = dfmi['one'] followed by dfmi_with_one['second']. pandas sees these as
separate events, i.e. separate calls to __getitem__, so it has to treat them as linear operations that happen one after another.
Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one','second')) to a single call to
__getitem__. This allows pandas to deal with this as a single entity. Furthermore this order of operations can be significantly
faster, and allows one to index both axes if so desired.
Why does assignment fail when using chained indexing?#
The problem in the previous section is just a performance issue. What’s up with
the SettingWithCopy warning? We don’t usually throw warnings around when
you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has
inherently unpredictable results. To see this, think about how the Python
interpreter executes this code:
dfmi.loc[:, ('one', 'second')] = value
# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
But this code is handled differently:
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
See that __getitem__ in there? Outside of simple cases, it’s very hard to
predict whether it will return a view or a copy (it depends on the memory layout
of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown
out immediately afterward. That’s what SettingWithCopy is warning you
about!
Note
You may be wondering whether we should be concerned about the loc
property in the first example. But dfmi.loc is guaranteed to be dfmi
itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course,
dfmi.loc.__getitem__(idx) may be a view or a copy of dfmi.
Sometimes a SettingWithCopy warning will arise when there’s no
obvious chained indexing going on. These are the bugs that
SettingWithCopy is designed to catch! pandas is probably trying to warn you
that you’ve done this:
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
# We don't know whether this will modify df or not!
foo['quux'] = value
return foo
Yikes!
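The usual remedy is to take an explicit copy when you intend to modify the subset, so that both pandas and the reader know the result is independent of df. A sketch of the same function rewritten that way (value is passed in here only to keep the sketch self-contained):
def do_something(df, value):
    foo = df[['bar', 'baz']].copy()  # explicitly a new object, never a view of df
    # ... many lines here ...
    foo['quux'] = value              # modifies foo only; no SettingWithCopy warning
    return foo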
Evaluation order matters#
When you use chained indexing, the order and type of the indexing operation
partially determine whether the result is a slice into the original object, or
a copy of the slice.
pandas has the SettingWithCopyWarning because assigning to a copy of a
slice is frequently not intentional, but a mistake caused by chained indexing
returning a copy where a slice was expected.
If you would like pandas to be more or less trusting about assignment to a
chained indexing expression, you can set the option
mode.chained_assignment to one of these values:
'warn', the default, means a SettingWithCopyWarning is printed.
'raise' means pandas will raise a SettingWithCopyError
you have to deal with.
None will suppress the warnings entirely.
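If you only want to change this behavior for a limited block of code, pd.option_context scopes the setting (a small sketch):
>>> with pd.option_context('mode.chained_assignment', None):
...     pass  # chained-assignment warnings are suppressed only inside this block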
In [360]: dfb = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
# This will show the SettingWithCopyWarning
# but the frame values will be set
In [361]: dfb['c'][dfb['a'].str.startswith('o')] = 42
This, however, operates on a copy and will not work.
>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb['a'].str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
A chained assignment can also crop up when setting values in a mixed-dtype frame.
Note
These setting rules apply to all of .loc/.iloc.
The following is the recommended access method using .loc for multiple items (using mask) and a single item using a fixed index:
In [362]: dfc = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
In [363]: dfd = dfc.copy()
# Setting multiple items using a mask
In [364]: mask = dfd['a'].str.startswith('o')
In [365]: dfd.loc[mask, 'c'] = 42
In [366]: dfd
Out[366]:
a c
0 one 42
1 one 42
2 two 2
3 three 3
4 two 4
5 one 42
6 six 6
# Setting a single item
In [367]: dfd = dfc.copy()
In [368]: dfd.loc[2, 'a'] = 11
In [369]: dfd
Out[369]:
a c
0 one 0
1 one 1
2 11 2
3 three 3
4 two 4
5 one 5
6 six 6
The following can work at times, but it is not guaranteed to, and therefore should be avoided:
In [370]: dfd = dfc.copy()
In [371]: dfd['a'][2] = 111
In [372]: dfd
Out[372]:
a c
0 one 0
1 one 1
2 111 2
3 three 3
4 two 4
5 one 5
6 six 6
Last, the subsequent example will not work at all, and so should be avoided:
>>> pd.set_option('mode.chained_assignment','raise')
>>> dfd.loc[0]['a'] = 1111
Traceback (most recent call last)
...
SettingWithCopyError:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
Warning
The chained assignment warnings / exceptions aim to inform the user of a possibly invalid
assignment. There may be false positives: situations where a chained assignment is inadvertently
reported.
| 196
| 916
|
Pandas df.loc indexing with multiple conditions and retrieving value
I can't understand this example in the documentation for version 1.1.4
In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
Then it slices it using labels
In [50]: df1.loc[:, df1.loc['a'] > 0]
Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
But the value at index 'f' is less than zero.
And the condition is on index 'a' so why is it returning the entire first column?
|
63,367,498
|
Python: Line and bar graph with year on X axis?
|
<p>I'm trying to have the years on the X-axis, the volume represented by the bars and the value represented by the line. Can anyone see where I'm going wrong?</p>
<pre><code>import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
data1 = pd.DataFrame({'Year' : [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019],
'Volume' : [32, 35, 33, 36, 38, 39, 40, 42, 49, 47],
'Value' : [40, 41, 46, 44, 43, 42, 42, 45, 48, 52]})
data1[['Year', 'Volume']].plot(kind='bar', width = 0.5, color="navy")
data1['Value'].plot(secondary_y=True, color='orange')
plt.show()
</code></pre>
| 63,367,514
| 2020-08-11T23:01:23.730000
| 1
| null | 0
| 203
|
python|pandas
|
<p>You want to pass <code>x='Year'</code> into the plot commands.</p>
<pre><code>data1[['Year', 'Volume']].plot(x='Year',kind='bar', width = 0.5, color="navy")
data1['Value'].plot(secondary_y=True, color='orange')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/mfFfZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mfFfZ.png" alt="enter image description here" /></a></p>
| 2020-08-11T23:03:57.980000
| 1
|
https://pandas.pydata.org/docs/user_guide/visualization.html
|
Chart visualization#
Chart visualization#
Note
The examples below assume that you’re using Jupyter.
This section demonstrates visualization through charting. For information on
visualization of tabular data please see the section on Table Visualization.
We use the standard convention for referencing the matplotlib API:
In [1]: import matplotlib.pyplot as plt
In [2]: plt.close("all")
We provide the basics in pandas to easily create decent looking plots.
See the ecosystem section for visualization
libraries that go beyond the basics documented here.
You want to pass x='Year' into the plot commands.
data1[['Year', 'Volume']].plot(x='Year',kind='bar', width = 0.5, color="navy")
data1['Value'].plot(secondary_y=True, color='orange')
Output:
Note
All calls to np.random are seeded with 123456.
Basic plotting: plot#
We will demonstrate the basics, see the cookbook for
some advanced strategies.
The plot method on Series and DataFrame is just a simple wrapper around
plt.plot():
In [3]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [4]: ts = ts.cumsum()
In [5]: ts.plot();
If the index consists of dates, it calls gcf().autofmt_xdate()
to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [6]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [7]: df = df.cumsum()
In [8]: plt.figure();
In [9]: df.plot();
You can plot one column versus another using the x and y keywords in
plot():
In [10]: df3 = pd.DataFrame(np.random.randn(1000, 2), columns=["B", "C"]).cumsum()
In [11]: df3["A"] = pd.Series(list(range(len(df))))
In [12]: df3.plot(x="A", y="B");
Note
For more formatting and styling options, see
formatting below.
Other plots#
Plotting methods allow for a handful of plot styles other than the
default line plot. These methods can be provided as the kind
keyword argument to plot(), and include:
‘bar’ or ‘barh’ for bar plots
‘hist’ for histogram
‘box’ for boxplot
‘kde’ or ‘density’ for density plots
‘area’ for area plots
‘scatter’ for scatter plots
‘hexbin’ for hexagonal bin plots
‘pie’ for pie plots
For example, a bar plot can be created the following way:
In [13]: plt.figure();
In [14]: df.iloc[5].plot(kind="bar");
You can also create these other plots using the methods DataFrame.plot.<kind> instead of providing the kind keyword argument. This makes it easier to discover plot methods and the specific arguments they use:
In [15]: df = pd.DataFrame()
In [16]: df.plot.<TAB> # noqa: E225, E999
df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter
df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie
In addition to these kinds, there are the DataFrame.hist(),
and DataFrame.boxplot() methods, which use a separate interface.
Finally, there are several plotting functions in pandas.plotting
that take a Series or DataFrame as an argument. These
include:
Scatter Matrix
Andrews Curves
Parallel Coordinates
Lag Plot
Autocorrelation Plot
Bootstrap Plot
RadViz
Plots may also be adorned with errorbars
or tables.
Bar plots#
For labeled, non-time series data, you may wish to produce a bar plot:
In [17]: plt.figure();
In [18]: df.iloc[5].plot.bar();
In [19]: plt.axhline(0, color="k");
Calling a DataFrame’s plot.bar() method produces a multiple
bar plot:
In [20]: df2 = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [21]: df2.plot.bar();
To produce a stacked bar plot, pass stacked=True:
In [22]: df2.plot.bar(stacked=True);
To get horizontal bar plots, use the barh method:
In [23]: df2.plot.barh(stacked=True);
Histograms#
Histograms can be drawn by using the DataFrame.plot.hist() and Series.plot.hist() methods.
In [24]: df4 = pd.DataFrame(
....: {
....: "a": np.random.randn(1000) + 1,
....: "b": np.random.randn(1000),
....: "c": np.random.randn(1000) - 1,
....: },
....: columns=["a", "b", "c"],
....: )
....:
In [25]: plt.figure();
In [26]: df4.plot.hist(alpha=0.5);
A histogram can be stacked using stacked=True. Bin size can be changed
using the bins keyword.
In [27]: plt.figure();
In [28]: df4.plot.hist(stacked=True, bins=20);
You can pass other keywords supported by matplotlib hist. For example,
horizontal and cumulative histograms can be drawn by
orientation='horizontal' and cumulative=True.
In [29]: plt.figure();
In [30]: df4["a"].plot.hist(orientation="horizontal", cumulative=True);
See the hist method and the
matplotlib hist documentation for more.
The existing interface DataFrame.hist to plot histogram still can be used.
In [31]: plt.figure();
In [32]: df["A"].diff().hist();
DataFrame.hist() plots the histograms of the columns on multiple
subplots:
In [33]: plt.figure();
In [34]: df.diff().hist(color="k", alpha=0.5, bins=50);
The by keyword can be specified to plot grouped histograms:
In [35]: data = pd.Series(np.random.randn(1000))
In [36]: data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4));
In addition, the by keyword can also be specified in DataFrame.plot.hist().
Changed in version 1.4.0.
In [37]: data = pd.DataFrame(
....: {
....: "a": np.random.choice(["x", "y", "z"], 1000),
....: "b": np.random.choice(["e", "f", "g"], 1000),
....: "c": np.random.randn(1000),
....: "d": np.random.randn(1000) - 1,
....: },
....: )
....:
In [38]: data.plot.hist(by=["a", "b"], figsize=(10, 5));
Box plots#
Boxplot can be drawn calling Series.plot.box() and DataFrame.plot.box(),
or DataFrame.boxplot() to visualize the distribution of values within each column.
For instance, here is a boxplot representing five trials of 10 observations of
a uniform random variable on [0,1).
In [39]: df = pd.DataFrame(np.random.rand(10, 5), columns=["A", "B", "C", "D", "E"])
In [40]: df.plot.box();
Boxplots can be colorized by passing the color keyword. You can pass a dict
whose keys are boxes, whiskers, medians and caps.
If some keys are missing from the dict, default colors are used
for the corresponding artists. The boxplot also has a sym keyword to specify the flier style.
When you pass another type of argument via the color keyword, it will be passed
directly to matplotlib to colorize all the boxes, whiskers, medians and caps.
The colors are applied to every box that is drawn. If you want
more complicated colorization, you can get each drawn artist by passing
return_type.
In [41]: color = {
....: "boxes": "DarkGreen",
....: "whiskers": "DarkOrange",
....: "medians": "DarkBlue",
....: "caps": "Gray",
....: }
....:
In [42]: df.plot.box(color=color, sym="r+");
Also, you can pass other keywords supported by matplotlib boxplot.
For example, horizontal and custom-positioned boxplot can be drawn by
vert=False and positions keywords.
In [43]: df.plot.box(vert=False, positions=[1, 4, 5, 6, 8]);
See the boxplot method and the
matplotlib boxplot documentation for more.
The existing interface DataFrame.boxplot to plot boxplot still can be used.
In [44]: df = pd.DataFrame(np.random.rand(10, 5))
In [45]: plt.figure();
In [46]: bp = df.boxplot()
You can create a stratified boxplot using the by keyword argument to create
groupings. For instance,
In [47]: df = pd.DataFrame(np.random.rand(10, 2), columns=["Col1", "Col2"])
In [48]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [49]: plt.figure();
In [50]: bp = df.boxplot(by="X")
You can also pass a subset of columns to plot, as well as group by multiple
columns:
In [51]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [52]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [53]: df["Y"] = pd.Series(["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"])
In [54]: plt.figure();
In [55]: bp = df.boxplot(column=["Col1", "Col2"], by=["X", "Y"])
You could also create groupings with DataFrame.plot.box(), for instance:
Changed in version 1.4.0.
In [56]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [57]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [58]: plt.figure();
In [59]: bp = df.plot.box(column=["Col1", "Col2"], by="X")
In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes", "dict", "both", None}.
Faceting, created by DataFrame.boxplot with the by
keyword, will affect the output type as well:
return_type   Faceted   Output type
None          No        axes
None          Yes       2-D ndarray of axes
'axes'        No        axes
'axes'        Yes       Series of axes
'dict'        No        dict of artists
'dict'        Yes       Series of dicts of artists
'both'        No        namedtuple
'both'        Yes       Series of namedtuples
Groupby.boxplot always returns a Series of return_type.
In [60]: np.random.seed(1234)
In [61]: df_box = pd.DataFrame(np.random.randn(50, 2))
In [62]: df_box["g"] = np.random.choice(["A", "B"], size=50)
In [63]: df_box.loc[df_box["g"] == "B", 1] += 3
In [64]: bp = df_box.boxplot(by="g")
The subplots above are split by the numeric columns first, then the value of
the g column. Below the subplots are first split by the value of g,
then by the numeric columns.
In [65]: bp = df_box.groupby("g").boxplot()
Area plot#
You can create area plots with Series.plot.area() and DataFrame.plot.area().
Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values.
When the input data contains NaN, it will be automatically filled with 0. If you want to drop NaNs or fill them with different values, use dataframe.dropna() or dataframe.fillna() before calling plot.
In [66]: df = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [67]: df.plot.area();
To produce an unstacked plot, pass stacked=False. Alpha value is set to 0.5 unless otherwise specified:
In [68]: df.plot.area(stacked=False);
Scatter plot#
Scatter plot can be drawn by using the DataFrame.plot.scatter() method.
Scatter plot requires numeric columns for the x and y axes.
These can be specified by the x and y keywords.
In [69]: df = pd.DataFrame(np.random.rand(50, 4), columns=["a", "b", "c", "d"])
In [70]: df["species"] = pd.Categorical(
....: ["setosa"] * 20 + ["versicolor"] * 20 + ["virginica"] * 10
....: )
....:
In [71]: df.plot.scatter(x="a", y="b");
To plot multiple column groups in a single axes, repeat the plot method, specifying the target ax.
It is recommended to specify the color and label keywords to distinguish each group.
In [72]: ax = df.plot.scatter(x="a", y="b", color="DarkBlue", label="Group 1")
In [73]: df.plot.scatter(x="c", y="d", color="DarkGreen", label="Group 2", ax=ax);
The keyword c may be given as the name of a column to provide colors for
each point:
In [74]: df.plot.scatter(x="a", y="b", c="c", s=50);
If a categorical column is passed to c, then a discrete colorbar will be produced:
New in version 1.3.0.
In [75]: df.plot.scatter(x="a", y="b", c="species", cmap="viridis", s=50);
You can pass other keywords supported by matplotlib
scatter. The example below shows a
bubble chart using a column of the DataFrame as the bubble size.
In [76]: df.plot.scatter(x="a", y="b", s=df["c"] * 200);
See the scatter method and the
matplotlib scatter documentation for more.
Hexagonal bin plot#
You can create hexagonal bin plots with DataFrame.plot.hexbin().
Hexbin plots can be a useful alternative to scatter plots if your data are
too dense to plot each point individually.
In [77]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [78]: df["b"] = df["b"] + np.arange(1000)
In [79]: df.plot.hexbin(x="a", y="b", gridsize=25);
A useful keyword argument is gridsize; it controls the number of hexagons
in the x-direction, and defaults to 100. A larger gridsize means more, smaller
bins.
By default, a histogram of the counts around each (x, y) point is computed.
You can specify alternative aggregations by passing values to the C and
reduce_C_function arguments. C specifies the value at each (x, y) point
and reduce_C_function is a function of one argument that reduces all the
values in a bin to a single number (e.g. mean, max, sum, std). In this
example the positions are given by columns a and b, while the value is
given by column z. The bins are aggregated with NumPy’s max function.
In [80]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [81]: df["b"] = df["b"] + np.arange(1000)
In [82]: df["z"] = np.random.uniform(0, 3, 1000)
In [83]: df.plot.hexbin(x="a", y="b", C="z", reduce_C_function=np.max, gridsize=25);
See the hexbin method and the
matplotlib hexbin documentation for more.
Pie plot#
You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie().
If your data includes any NaN, they will be automatically filled with 0.
A ValueError will be raised if there are any negative values in your data.
In [84]: series = pd.Series(3 * np.random.rand(4), index=["a", "b", "c", "d"], name="series")
In [85]: series.plot.pie(figsize=(6, 6));
For pie plots it’s best to use square figures, i.e. a figure aspect ratio 1.
You can create the figure with equal width and height, or force the aspect ratio
to be equal after plotting by calling ax.set_aspect('equal') on the returned
axes object.
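A minimal sketch of the second approach, reusing series from above:
>>> ax = series.plot.pie(figsize=(6, 6))
>>> ax.set_aspect('equal')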
Note that a pie plot with a DataFrame requires that you either specify a
target column with the y argument or pass subplots=True. When y is
specified, a pie plot of the selected column will be drawn. If subplots=True is
specified, pie plots for each column are drawn as subplots. A legend will be
drawn in each pie plot by default; specify legend=False to hide it.
In [86]: df = pd.DataFrame(
....: 3 * np.random.rand(4, 2), index=["a", "b", "c", "d"], columns=["x", "y"]
....: )
....:
In [87]: df.plot.pie(subplots=True, figsize=(8, 4));
You can use the labels and colors keywords to specify the labels and colors of each wedge.
Warning
Most pandas plots use the label and color arguments (note the lack of “s” on those).
To be consistent with matplotlib.pyplot.pie() you must use labels and colors.
If you want to hide wedge labels, specify labels=None.
If fontsize is specified, the value will be applied to wedge labels.
Also, other keywords supported by matplotlib.pyplot.pie() can be used.
In [88]: series.plot.pie(
....: labels=["AA", "BB", "CC", "DD"],
....: colors=["r", "g", "b", "c"],
....: autopct="%.2f",
....: fontsize=20,
....: figsize=(6, 6),
....: );
....:
If you pass values whose sum total is less than 1.0 they will be rescaled so that they sum to 1.
In [89]: series = pd.Series([0.1] * 4, index=["a", "b", "c", "d"], name="series2")
In [90]: series.plot.pie(figsize=(6, 6));
See the matplotlib pie documentation for more.
Plotting with missing data#
pandas tries to be pragmatic about plotting DataFrames or Series
that contain missing data. Missing values are dropped, left out, or filled
depending on the plot type.
Plot Type        NaN Handling
Line             Leave gaps at NaNs
Line (stacked)   Fill 0’s
Bar              Fill 0’s
Scatter          Drop NaNs
Histogram        Drop NaNs (column-wise)
Box              Drop NaNs (column-wise)
Area             Fill 0’s
KDE              Drop NaNs (column-wise)
Hexbin           Drop NaNs
Pie              Fill 0’s
If any of these defaults are not what you want, or if you want to be
explicit about how missing values are handled, consider using
fillna() or dropna()
before plotting.
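For example, to force zeros rather than gaps in a line plot, you might fill explicitly before plotting (a sketch with an arbitrary toy Series; the usual pandas/numpy imports are assumed):
>>> s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])
>>> s.fillna(0).plot();   # NaNs become 0 instead of gaps
>>> s.dropna().plot();    # or drop the missing points entirely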
Plotting tools#
These functions can be imported from pandas.plotting
and take a Series or DataFrame as an argument.
Scatter matrix plot#
You can create a scatter plot matrix using the
scatter_matrix method in pandas.plotting:
In [91]: from pandas.plotting import scatter_matrix
In [92]: df = pd.DataFrame(np.random.randn(1000, 4), columns=["a", "b", "c", "d"])
In [93]: scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal="kde");
Density plot#
You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods.
In [94]: ser = pd.Series(np.random.randn(1000))
In [95]: ser.plot.kde();
Andrews curves#
Andrews curves allow one to plot multivariate data as a large number
of curves that are created using the attributes of samples as coefficients
for Fourier series, see the Wikipedia entry
for more information. By coloring these curves differently for each class
it is possible to visualize data clustering. Curves belonging to samples
of the same class will usually be closer together and form larger structures.
Note: The “Iris” dataset is available here.
In [96]: from pandas.plotting import andrews_curves
In [97]: data = pd.read_csv("data/iris.data")
In [98]: plt.figure();
In [99]: andrews_curves(data, "Name");
Parallel coordinates#
Parallel coordinates is a plotting technique for plotting multivariate data,
see the Wikipedia entry
for an introduction.
Parallel coordinates allows one to see clusters in data and to estimate other statistics visually.
Using parallel coordinates points are represented as connected line segments.
Each vertical line represents one attribute. One set of connected line segments
represents one data point. Points that tend to cluster will appear closer together.
In [100]: from pandas.plotting import parallel_coordinates
In [101]: data = pd.read_csv("data/iris.data")
In [102]: plt.figure();
In [103]: parallel_coordinates(data, "Name");
Lag plot#
Lag plots are used to check if a data set or time series is random. Random
data should not exhibit any structure in the lag plot. Non-random structure
implies that the underlying data are not random. The lag argument may
be passed, and when lag=1 the plot is essentially data[:-1] vs.
data[1:].
In [104]: from pandas.plotting import lag_plot
In [105]: plt.figure();
In [106]: spacing = np.linspace(-99 * np.pi, 99 * np.pi, num=1000)
In [107]: data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(spacing))
In [108]: lag_plot(data);
Autocorrelation plot#
Autocorrelation plots are often used for checking randomness in time series.
This is done by computing autocorrelations for data values at varying time lags.
If time series is random, such autocorrelations should be near zero for any and
all time-lag separations. If time series is non-random then one or more of the
autocorrelations will be significantly non-zero. The horizontal lines displayed
in the plot correspond to 95% and 99% confidence bands. The dashed line is 99%
confidence band. See the
Wikipedia entry for more about
autocorrelation plots.
In [109]: from pandas.plotting import autocorrelation_plot
In [110]: plt.figure();
In [111]: spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
In [112]: data = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
In [113]: autocorrelation_plot(data);
Bootstrap plot#
Bootstrap plots are used to visually assess the uncertainty of a statistic, such
as mean, median, midrange, etc. A random subset of a specified size is selected
from a data set, the statistic in question is computed for this subset and the
process is repeated a specified number of times. Resulting plots and histograms
are what constitutes the bootstrap plot.
In [114]: from pandas.plotting import bootstrap_plot
In [115]: data = pd.Series(np.random.rand(1000))
In [116]: bootstrap_plot(data, size=50, samples=500, color="grey");
RadViz#
RadViz is a way of visualizing multi-variate data. It is based on a simple
spring tension minimization algorithm. Basically you set up a bunch of points in
a plane. In our case they are equally spaced on a unit circle. Each point
represents a single attribute. You then pretend that each sample in the data set
is attached to each of these points by a spring, the stiffness of which is
proportional to the numerical value of that attribute (they are normalized to
unit interval). The point in the plane, where our sample settles to (where the
forces acting on our sample are at an equilibrium) is where a dot representing
our sample will be drawn. Depending on which class that sample belongs to, it will
be colored differently.
See the R package Radviz
for more information.
Note: The “Iris” dataset is available here.
In [117]: from pandas.plotting import radviz
In [118]: data = pd.read_csv("data/iris.data")
In [119]: plt.figure();
In [120]: radviz(data, "Name");
Plot formatting#
Setting the plot style#
From version 1.5 and up, matplotlib offers a range of pre-configured plotting styles. Setting the
style can be used to easily give plots the general look that you want.
Setting the style is as easy as calling matplotlib.style.use(my_plot_style) before
creating your plot. For example you could write matplotlib.style.use('ggplot') for ggplot-style
plots.
You can see the various available style names at matplotlib.style.available and it’s very
easy to try them out.
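A minimal sketch ('ggplot' is just one of the built-in style names):
>>> import matplotlib
>>> matplotlib.style.available[:5]      # peek at a few of the registered style names
>>> matplotlib.style.use('ggplot')
>>> pd.Series(np.random.randn(100)).cumsum().plot();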
General plot style arguments#
Most plotting methods have a set of keyword arguments that control the
layout and formatting of the returned plot:
In [121]: plt.figure();
In [122]: ts.plot(style="k--", label="Series");
For each kind of plot (e.g. line, bar, scatter) any additional arguments
keywords are passed along to the corresponding matplotlib function
(ax.plot(),
ax.bar(),
ax.scatter()). These can be used
to control additional styling, beyond what pandas provides.
Controlling the legend#
You may set the legend argument to False to hide the legend, which is
shown by default.
In [123]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [124]: df = df.cumsum()
In [125]: df.plot(legend=False);
Controlling the labels#
New in version 1.1.0.
You may set the xlabel and ylabel arguments to give the plot custom labels
for the x and y axis. By default, pandas will pick up the index name as xlabel, while leaving
it empty for ylabel.
In [126]: df.plot();
In [127]: df.plot(xlabel="new x", ylabel="new y");
Scales#
You may pass logy to get a log-scale Y axis.
In [128]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [129]: ts = np.exp(ts.cumsum())
In [130]: ts.plot(logy=True);
See also the logx and loglog keyword arguments.
Plotting on a secondary y-axis#
To plot data on a secondary y-axis, use the secondary_y keyword:
In [131]: df["A"].plot();
In [132]: df["B"].plot(secondary_y=True, style="g");
To plot some columns in a DataFrame, give the column names to the secondary_y
keyword:
In [133]: plt.figure();
In [134]: ax = df.plot(secondary_y=["A", "B"])
In [135]: ax.set_ylabel("CD scale");
In [136]: ax.right_ax.set_ylabel("AB scale");
Note that the columns plotted on the secondary y-axis are automatically marked
with “(right)” in the legend. To turn off the automatic marking, use the
mark_right=False keyword:
In [137]: plt.figure();
In [138]: df.plot(secondary_y=["A", "B"], mark_right=False);
Custom formatters for timeseries plots#
Changed in version 1.0.0.
pandas provides custom formatters for timeseries plots. These change the
formatting of the axis labels for dates and times. By default,
the custom formatters are applied only to plots created by pandas with
DataFrame.plot() or Series.plot(). To have them apply to all
plots, including those made by matplotlib, set the option
pd.options.plotting.matplotlib.register_converters = True or use
pandas.plotting.register_matplotlib_converters().
Suppressing tick resolution adjustment#
pandas includes automatic tick resolution adjustment for regular frequency
time-series data. For limited cases where pandas cannot infer the frequency
information (e.g., in an externally created twinx), you can choose to
suppress this behavior for alignment purposes.
Here is the default behavior, notice how the x-axis tick labeling is performed:
In [139]: plt.figure();
In [140]: df["A"].plot();
Using the x_compat parameter, you can suppress this behavior:
In [141]: plt.figure();
In [142]: df["A"].plot(x_compat=True);
If you have more than one plot that needs to be suppressed, the use method
in pandas.plotting.plot_params can be used in a with statement:
In [143]: plt.figure();
In [144]: with pd.plotting.plot_params.use("x_compat", True):
.....: df["A"].plot(color="r")
.....: df["B"].plot(color="g")
.....: df["C"].plot(color="b")
.....:
Automatic date tick adjustment#
TimedeltaIndex now uses the native matplotlib
tick locator methods, so it is useful to call matplotlib's automatic
date tick adjustment for figures whose tick labels overlap.
See the autofmt_xdate method and the
matplotlib documentation for more.
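A minimal sketch, using an arbitrary timedelta-indexed Series:
>>> s = pd.Series(range(10), index=pd.timedelta_range('1 day', periods=10, freq='12H'))
>>> ax = s.plot()
>>> ax.get_figure().autofmt_xdate()   # rotate and align overlapping tick labels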
Subplots#
Each Series in a DataFrame can be plotted on a different axis
with the subplots keyword:
In [145]: df.plot(subplots=True, figsize=(6, 6));
Using layout and targeting multiple axes#
The layout of subplots can be specified by the layout keyword. It can accept
(rows, columns). The layout keyword can be used in
hist and boxplot also. If the input is invalid, a ValueError will be raised.
The number of axes that can be contained in the rows x columns specified by layout must be
at least as large as the number of required subplots. If layout can contain more axes than required,
blank axes are not drawn. Similar to a NumPy array’s reshape method, you
can use -1 for one dimension to automatically calculate the number of rows
or columns needed, given the other.
In [146]: df.plot(subplots=True, layout=(2, 3), figsize=(6, 6), sharex=False);
The above example is identical to using:
In [147]: df.plot(subplots=True, layout=(2, -1), figsize=(6, 6), sharex=False);
The required number of columns (3) is inferred from the number of series to plot
and the given number of rows (2).
You can pass multiple axes created beforehand as list-like via ax keyword.
This allows more complicated layouts.
The passed axes must be the same number as the subplots being drawn.
When multiple axes are passed via the ax keyword, the layout, sharex and sharey keywords
don't affect the output. You should explicitly pass sharex=False and sharey=False,
otherwise you will see a warning.
In [148]: fig, axes = plt.subplots(4, 4, figsize=(9, 9))
In [149]: plt.subplots_adjust(wspace=0.5, hspace=0.5)
In [150]: target1 = [axes[0][0], axes[1][1], axes[2][2], axes[3][3]]
In [151]: target2 = [axes[3][0], axes[2][1], axes[1][2], axes[0][3]]
In [152]: df.plot(subplots=True, ax=target1, legend=False, sharex=False, sharey=False);
In [153]: (-df).plot(subplots=True, ax=target2, legend=False, sharex=False, sharey=False);
Another option is passing an ax argument to Series.plot() to plot on a particular axis:
In [154]: fig, axes = plt.subplots(nrows=2, ncols=2)
In [155]: plt.subplots_adjust(wspace=0.2, hspace=0.5)
In [156]: df["A"].plot(ax=axes[0, 0]);
In [157]: axes[0, 0].set_title("A");
In [158]: df["B"].plot(ax=axes[0, 1]);
In [159]: axes[0, 1].set_title("B");
In [160]: df["C"].plot(ax=axes[1, 0]);
In [161]: axes[1, 0].set_title("C");
In [162]: df["D"].plot(ax=axes[1, 1]);
In [163]: axes[1, 1].set_title("D");
Plotting with error bars#
Plotting with error bars is supported in DataFrame.plot() and Series.plot().
Horizontal and vertical error bars can be supplied to the xerr and yerr keyword arguments to plot(). The error values can be specified using a variety of formats:
As a DataFrame or dict of errors with column names matching the columns attribute of the plotting DataFrame or matching the name attribute of the Series.
As a str indicating which of the columns of plotting DataFrame contain the error values.
As raw values (list, tuple, or np.ndarray). Must be the same length as the plotting DataFrame/Series.
Here is an example of one way to easily plot group means with standard deviations from the raw data.
# Generate the data
In [164]: ix3 = pd.MultiIndex.from_arrays(
.....: [
.....: ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
.....: ["foo", "foo", "foo", "bar", "bar", "foo", "foo", "bar", "bar", "bar"],
.....: ],
.....: names=["letter", "word"],
.....: )
.....:
In [165]: df3 = pd.DataFrame(
.....: {
.....: "data1": [9, 3, 2, 4, 3, 2, 4, 6, 3, 2],
.....: "data2": [9, 6, 5, 7, 5, 4, 5, 6, 5, 1],
.....: },
.....: index=ix3,
.....: )
.....:
# Group by index labels and take the means and standard deviations
# for each group
In [166]: gp3 = df3.groupby(level=("letter", "word"))
In [167]: means = gp3.mean()
In [168]: errors = gp3.std()
In [169]: means
Out[169]:
data1 data2
letter word
a bar 3.500000 6.000000
foo 4.666667 6.666667
b bar 3.666667 4.000000
foo 3.000000 4.500000
In [170]: errors
Out[170]:
data1 data2
letter word
a bar 0.707107 1.414214
foo 3.785939 2.081666
b bar 2.081666 2.645751
foo 1.414214 0.707107
# Plot
In [171]: fig, ax = plt.subplots()
In [172]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Asymmetrical error bars are also supported, however raw error values must be provided in this case. For a N length Series, a 2xN array should be provided indicating lower and upper (or left and right) errors. For a MxN DataFrame, asymmetrical errors should be in a Mx2xN array.
Here is an example of one way to plot the min/max range using asymmetrical error bars.
In [173]: mins = gp3.min()
In [174]: maxs = gp3.max()
# errors should be positive, and defined in the order of lower, upper
In [175]: errors = [[means[c] - mins[c], maxs[c] - means[c]] for c in df3.columns]
# Plot
In [176]: fig, ax = plt.subplots()
In [177]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Plotting tables#
Plotting with matplotlib table is now supported in DataFrame.plot() and Series.plot() with a table keyword. The table keyword can accept bool, DataFrame or Series. The simple way to draw a table is to specify table=True. Data will be transposed to meet matplotlib’s default layout.
In [178]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
In [179]: df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
In [180]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [181]: df.plot(table=True, ax=ax);
Also, you can pass a different DataFrame or Series to the
table keyword. The data will be drawn as displayed in print method
(not transposed automatically). If required, it should be transposed manually
as seen in the example below.
In [182]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.75))
In [183]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [184]: df.plot(table=np.round(df.T, 2), ax=ax);
There also exists a helper function pandas.plotting.table, which creates a
table from DataFrame or Series, and adds it to an
matplotlib.Axes instance. This function can accept keywords which the
matplotlib table has.
In [185]: from pandas.plotting import table
In [186]: fig, ax = plt.subplots(1, 1)
In [187]: table(ax, np.round(df.describe(), 2), loc="upper right", colWidths=[0.2, 0.2, 0.2]);
In [188]: df.plot(ax=ax, ylim=(0, 2), legend=None);
Note: You can get table instances on the axes using the axes.tables property for further decorations. See the matplotlib table documentation for more.
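For instance, a small sketch of restyling the table added above through that property (these are standard matplotlib.table.Table methods, reusing the ax from the previous example):
# grab the table drawn on the axes and adjust its appearance
tbl = ax.tables[0]
tbl.auto_set_font_size(False)
tbl.set_fontsize(8)
tbl.scale(1, 1.2)  # stretch the rows vertically by 20%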
Colormaps#
A potential issue when plotting a large number of columns is that it can be
difficult to distinguish some series due to repetition in the default colors. To
remedy this, DataFrame plotting supports the use of the colormap argument,
which accepts either a Matplotlib colormap
or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available here.
As matplotlib does not directly support colormaps for line-based plots, the
colors are selected based on an even spacing determined by the number of columns
in the DataFrame. There is no consideration made for background color, so some
colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass colormap='cubehelix'.
In [189]: df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
In [190]: df = df.cumsum()
In [191]: plt.figure();
In [192]: df.plot(colormap="cubehelix");
Alternatively, we can pass the colormap itself:
In [193]: from matplotlib import cm
In [194]: plt.figure();
In [195]: df.plot(colormap=cm.cubehelix);
Colormaps can also be used in other plot types, like bar charts:
In [196]: dd = pd.DataFrame(np.random.randn(10, 10)).applymap(abs)
In [197]: dd = dd.cumsum()
In [198]: plt.figure();
In [199]: dd.plot.bar(colormap="Greens");
Parallel coordinates charts:
In [200]: plt.figure();
In [201]: parallel_coordinates(data, "Name", colormap="gist_rainbow");
Andrews curves charts:
In [202]: plt.figure();
In [203]: andrews_curves(data, "Name", colormap="winter");
Plotting directly with Matplotlib#
In some situations it may still be preferable or necessary to prepare plots
directly with matplotlib, for instance when a certain type of plot or
customization is not (yet) supported by pandas. Series and DataFrame
objects behave like arrays and can therefore be passed directly to
matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date
indices, thereby extending date and time support to practically all plot types
available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster
when plotting a large number of points.
In [204]: price = pd.Series(
.....: np.random.randn(150).cumsum(),
.....: index=pd.date_range("2000-1-1", periods=150, freq="B"),
.....: )
.....:
In [205]: ma = price.rolling(20).mean()
In [206]: mstd = price.rolling(20).std()
In [207]: plt.figure();
In [208]: plt.plot(price.index, price, "k");
In [209]: plt.plot(ma.index, ma, "b");
In [210]: plt.fill_between(mstd.index, ma - 2 * mstd, ma + 2 * mstd, color="b", alpha=0.2);
Plotting backends#
Starting in version 0.25, pandas can be extended with third-party plotting backends. The
main idea is letting users select a plotting backend different than the provided
one based on Matplotlib.
This can be done by passing ‘backend.module’ as the argument backend in plot
function. For example:
>>> Series([1, 2, 3]).plot(backend="backend.module")
Alternatively, you can also set this option globally, so you don’t need to specify
the keyword in each plot call. For example:
>>> pd.set_option("plotting.backend", "backend.module")
>>> pd.Series([1, 2, 3]).plot()
Or:
>>> pd.options.plotting.backend = "backend.module"
>>> pd.Series([1, 2, 3]).plot()
This would be more or less equivalent to:
>>> import backend.module
>>> backend.module.plot(pd.Series([1, 2, 3]))
The backend module can then use other visualization tools (Bokeh, Altair, hvplot,…)
to generate the plots. Some libraries implementing a backend for pandas are listed
on the ecosystem Visualization page.
Developers guide can be found at
https://pandas.pydata.org/docs/dev/development/extending.html#plotting-backends
| 561
| 753
|
Python: Line and bar graph with year on X axis?
I'm trying to have the years on the X-axis, the volume represented by the bars and the value represented by the line. Can anyone see where I'm going wrong?
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
data1 = pd.DataFrame({'Year' : [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019],
'Volume' : [32, 35, 33, 36, 38, 39, 40, 42, 49, 47],
'Value' : [40, 41, 46, 44, 43, 42, 42, 45, 48, 52]})
data1[['Year', 'Volume']].plot(kind='bar', width = 0.5, color="navy")
data1['Value'].plot(secondary_y=True, color='orange')
plt.show()
|
64,850,973
|
remove points located within a specific area - python
|
<p>I'm trying to remove points that are located within a specific area. Using below, I'm hoping to remove points that are located within the blue box. Ideally, I'd map out a polygon that followed the contour of the circle more closely. This is just a rough description.</p>
<p>I'm currently applying a crude subset to the y-coordinates:</p>
<pre><code>df = df[df['A'] > 0]
df = df[df['C'] > 0]
</code></pre>
<p>While this gets the majority of points, I'm hoping to improve the method.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
df = pd.DataFrame(np.random.randint(-25,125,size=(500, 4)), columns=list('ABCD'))
fig, ax = plt.subplots()
ax.set_xlim(-50, 150)
ax.set_ylim(-50, 150)
x = ([1.25,0.5,-0.25,0.5,1.25,-10,-10,1.25])
y = ([75,62.5,50,37.5,25,25,75,75])
plt.plot(x,y)
A = df['A']
B = df['B']
C = df['C']
D = df['D']
plt.scatter(df['A'], df['B'], color = 'purple', alpha = 0.2);
plt.scatter(df['C'], df['D'], color = 'orange', alpha = 0.2);
Oval_patch = mpl.patches.Ellipse((50,50), 100, 150, color = 'k', fill = False)
ax.add_patch(Oval_patch)
</code></pre>
| 64,853,958
| 2020-11-15T23:45:51.793000
| 1
| null | 1
| 206
|
python|pandas
|
<p>I'd suggest to store your polygon (the <code><Line2D object></code>) in a variable like this:</p>
<pre><code>line = plt.plot(x,y)
</code></pre>
<p>Which enables you to utilise the <a href="https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D.get_path" rel="nofollow noreferrer">get_path() method</a> to get the underlying path object, which has a builtin <a href="https://matplotlib.org/3.3.2/api/path_api.html#matplotlib.path.Path.contains_points" rel="nofollow noreferrer">contains_points() method</a>.</p>
<p>Next, include the following lines:</p>
<pre><code>pts = df[['C','D']]
mask = line[0].get_path().contains_points(pts)
plt.scatter(df['C'][mask],df['D'][mask],color="red",zorder=10)
</code></pre>
<p>and you should be able to verify that the correct points are selected by the mask:</p>
<p><a href="https://i.stack.imgur.com/Ec1eV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ec1eV.png" alt="masked points highlighted in red" /></a></p>
<p>Now, to remove the points within your selected region, you can use <code>~mask</code> to negate the boolean mask, i.e. use:</p>
<pre><code>pts = df[['A','B']]
mask_ab = line[0].get_path().contains_points(pts)
pts = df[['C','D']]
mask_cd = line[0].get_path().contains_points(pts)
plt.scatter(df['A'][~mask_ab], df['B'][~mask_ab], color = 'purple', alpha = 0.2)
plt.scatter(df['C'][~mask_cd], df['D'][~mask_cd], color = 'orange', alpha = 0.2)
</code></pre>
<p>to get this result:</p>
<p><a href="https://i.stack.imgur.com/XZQ0F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XZQ0F.png" alt="masked points removed" /></a></p>
| 2020-11-16T07:14:19.020000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.drop.html
|
pandas.Series.drop#
pandas.Series.drop#
Series.drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')[source]#
Return Series with specified index labels removed.
Remove elements of a Series based on specifying the index labels.
When using a multi-index, labels on different levels can be removed
I'd suggest to store your polygon (the <Line2D object>) in a variable like this:
line = plt.plot(x,y)
Which enables you to utilise the get_path() method to get the underlying path object, which has a builtin contains_points() method.
Next, include the following lines:
pts = df[['C','D']]
mask = line[0].get_path().contains_points(pts)
plt.scatter(df['C'][mask],df['D'][mask],color="red",zorder=10)
and you should be able to verify that the correct points are selected by the mask:
Now, to remove the points within your selected region, you can use ~mask to negate the boolean mask, i.e. use:
pts = df[['A','B']]
mask_ab = line[0].get_path().contains_points(pts)
pts = df[['C','D']]
mask_cd = line[0].get_path().contains_points(pts)
plt.scatter(df['A'][~mask_ab], df['B'][~mask_ab], color = 'purple', alpha = 0.2)
plt.scatter(df['C'][~mask_cd], df['D'][~mask_cd], color = 'orange', alpha = 0.2)
to get this result:
by specifying the level.
Parameters
labels : single label or list-like
Index labels to drop.
axis : {0 or ‘index’}
Unused. Parameter needed for compatibility with DataFrame.
index : single label or list-like
Redundant for application on Series, but ‘index’ can be used instead
of ‘labels’.
columns : single label or list-like
No change is made to the Series; use ‘index’ or ‘labels’ instead.
level : int or level name, optional
For MultiIndex, level for which the labels will be removed.
inplace : bool, default False
If True, do operation inplace and return None.
errors : {‘ignore’, ‘raise’}, default ‘raise’
If ‘ignore’, suppress error and only existing labels are dropped.
Returns
Series or None
Series with specified index labels removed or None if inplace=True.
Raises
KeyError
If none of the labels are found in the index.
See also
Series.reindex : Return only specified index labels of Series.
Series.dropna : Return series without null values.
Series.drop_duplicates : Return Series with duplicate values removed.
DataFrame.drop : Drop specified labels from rows or columns.
Examples
>>> s = pd.Series(data=np.arange(3), index=['A', 'B', 'C'])
>>> s
A 0
B 1
C 2
dtype: int64
Drop labels B en C
>>> s.drop(labels=['B', 'C'])
A 0
dtype: int64
Drop 2nd level label in MultiIndex Series
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> s = pd.Series([45, 200, 1.2, 30, 250, 1.5, 320, 1, 0.3],
... index=midx)
>>> s
lama speed 45.0
weight 200.0
length 1.2
cow speed 30.0
weight 250.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
dtype: float64
>>> s.drop(labels='weight', level=1)
lama speed 45.0
length 1.2
cow speed 30.0
length 1.5
falcon speed 320.0
length 0.3
dtype: float64
| 343
| 1,264
|
remove points located within a specific area - python
I'm trying to remove points that are located within a specific area. Using below, I'm hoping to remove points that are located within the blue box. Ideally, I'd map out a polygon that followed the contour of the circle more closely. This is just a rough description.
I'm currently applying a crude subset to the y-coordinates:
df = df[df['A'] > 0]
df = df[df['C'] > 0]
While this gets the majority of points, I'm hoping to improve the method.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
df = pd.DataFrame(np.random.randint(-25,125,size=(500, 4)), columns=list('ABCD'))
fig, ax = plt.subplots()
ax.set_xlim(-50, 150)
ax.set_ylim(-50, 150)
x = ([1.25,0.5,-0.25,0.5,1.25,-10,-10,1.25])
y = ([75,62.5,50,37.5,25,25,75,75])
plt.plot(x,y)
A = df['A']
B = df['B']
C = df['C']
D = df['D']
plt.scatter(df['A'], df['B'], color = 'purple', alpha = 0.2);
plt.scatter(df['C'], df['D'], color = 'orange', alpha = 0.2);
Oval_patch = mpl.patches.Ellipse((50,50), 100, 150, color = 'k', fill = False)
ax.add_patch(Oval_patch)
|
67,142,487
|
how do i fix this ? TypeError: cannot unpack non-iterable AxesSubplot object
|
<pre><code>s = "sat2019|sat2020".split("|")
j = {x:[random.choice(["EWR","Math"]
)for j in range(300)] for x in s}
df = pd.DataFrame(j)
index = np.arange(5)
bar_width = 0.35
fig, ax = plt.subplot()
Sat2019 = ax.bar(index,df["sat2019"].value_counts(),bar_width, label="sat2019")
Sat2020 = ax.bar(index+bar_width,df["sat2020"].value_counts(),bar_width, label="sat2020")
ax.set_xlabel('category')
ax.set_ylabel('incidence')
ax.set_title("diff between years")
ax.set_xticks(index+bar_width/2)
ax.set_xticklabels(["ERW","Math"])
ax.legend()
plt.show()
</code></pre>
<p>The above is my code and below is my error. How do I fix this?</p>
<p>fig, ax = plt.subplot()
TypeError: cannot unpack non-iterable AxesSubplot object
I get this error; how do I fix it?</p>
| 67,144,962
| 2021-04-17T20:05:09.597000
| 1
| null | 0
| 1,486
|
python|pandas
|
<p>The return value of <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplot.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.subplot()</code></a> is just axes. What you want is <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.subplots()</code></a> which returns figure and axes.</p>
<p>After fixing this, you still have some issues in your code, such as the lengths of <code>index</code> and <code>df["sat2019"].value_counts()</code> not matching.</p>
<p>I recommend plot using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>pandas.DataFrame.plot()</code></a></p>
<pre class="lang-py prettyprint-override"><code>import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
s = "sat2019|sat2020".split("|")
j = {x:[random.choice(["EWR","Math"]) for j in range(300)] for x in s}
df = pd.DataFrame(j)
ax = pd.concat([df["sat2019"].value_counts(), df["sat2020"].value_counts()], axis=1).plot.bar()
ax.set_xlabel('category')
ax.set_ylabel('incidence')
ax.set_title("diff between years")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/vCgO6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vCgO6.png" alt="enter image description here" /></a></p>
<p>This is also suitable for the case when one of your <code>sat</code> columns has extra categories.</p>
<pre class="lang-py prettyprint-override"><code>df.loc[0, 'sat2019'] = 'test1'
df.loc[0, 'sat2020'] = 'test2'
ax = pd.concat([df["sat2019"].value_counts(), df["sat2020"].value_counts()], axis=1).plot.bar()
</code></pre>
<p><a href="https://i.stack.imgur.com/GsGil.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GsGil.png" alt="enter image description here" /></a></p>
| 2021-04-18T03:35:59.730000
| 1
|
https://pandas.pydata.org/docs/development/contributing_docstring.html
|
pandas docstring guide#
pandas docstring guide#
About docstrings and standards#
A Python docstring is a string used to document a Python module, class,
The return value of matplotlib.pyplot.subplot() is just axes. What you want is matplotlib.pyplot.subplots() which returns figure and axes.
After fixing this, you still have some issues in your code, such as the lengths of index and df["sat2019"].value_counts() not matching.
I recommend plot using pandas.DataFrame.plot()
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
s = "sat2019|sat2020".split("|")
j = {x:[random.choice(["EWR","Math"]) for j in range(300)] for x in s}
df = pd.DataFrame(j)
ax = pd.concat([df["sat2019"].value_counts(), df["sat2020"].value_counts()], axis=1).plot.bar()
ax.set_xlabel('category')
ax.set_ylabel('incidence')
ax.set_title("diff between years")
plt.show()
This is also suitable for the case when one of your sat columns has extra categories.
df.loc[0, 'sat2019'] = 'test1'
df.loc[0, 'sat2020'] = 'test2'
ax = pd.concat([df["sat2019"].value_counts(), df["sat2020"].value_counts()], axis=1).plot.bar()
function or method, so programmers can understand what it does without having
to read the details of the implementation.
Also, it is a common practice to generate online (html) documentation
automatically from docstrings. Sphinx serves
this purpose.
The next example gives an idea of what a docstring looks like:
def add(num1, num2):
"""
Add up two integer numbers.
This function simply wraps the ``+`` operator, and does not
do anything interesting, except for illustrating what
the docstring of a very simple function looks like.
Parameters
----------
num1 : int
First number to add.
num2 : int
Second number to add.
Returns
-------
int
The sum of ``num1`` and ``num2``.
See Also
--------
subtract : Subtract one integer from another.
Examples
--------
>>> add(2, 2)
4
>>> add(25, 0)
25
>>> add(10, -10)
0
"""
return num1 + num2
Some standards regarding docstrings exist, which make them easier to read, and allow them
to be easily exported to other formats such as html or pdf.
The first conventions every Python docstring should follow are defined in
PEP-257.
As PEP-257 is quite broad, other more specific standards also exist. In the
case of pandas, the NumPy docstring convention is followed. These conventions are
explained in this document:
numpydoc docstring guide
(which is based in the original Guide to NumPy/SciPy documentation)
numpydoc is a Sphinx extension to support the NumPy docstring convention.
The standard uses reStructuredText (reST). reStructuredText is a markup
language that allows encoding styles in plain text files. Documentation
about reStructuredText can be found in:
Sphinx reStructuredText primer
Quick reStructuredText reference
Full reStructuredText specification
pandas has some helpers for sharing docstrings between related classes, see
Sharing docstrings.
The rest of this document will summarize all the above guidelines, and will
provide additional conventions specific to the pandas project.
Writing a docstring#
General rules#
Docstrings must be defined with three double-quotes. No blank lines should be
left before or after the docstring. The text starts in the next line after the
opening quotes. The closing quotes have their own line
(meaning that they are not at the end of the last sentence).
On rare occasions reST styles like bold text or italics will be used in
docstrings, but it is common to have inline code, which is presented between
backticks. The following are considered inline code:
The name of a parameter
Python code, a module, function, built-in, type, literal… (e.g. os,
list, numpy.abs, datetime.date, True)
A pandas class (in the form :class:`pandas.Series`)
A pandas method (in the form :meth:`pandas.Series.sum`)
A pandas function (in the form :func:`pandas.to_datetime`)
Note
To display only the last component of the linked class, method or
function, prefix it with ~. For example, :class:`~pandas.Series`
will link to pandas.Series but only display the last part, Series
as the link text. See Sphinx cross-referencing syntax
for details.
Good:
def add_values(arr):
"""
Add the values in ``arr``.
This is equivalent to Python ``sum`` of :meth:`pandas.Series.sum`.
Some sections are omitted here for simplicity.
"""
return sum(arr)
Bad:
def func():
"""Some function.
With several mistakes in the docstring.
It has a blank line after the signature ``def func():``.
The text 'Some function' should go in the line after the
opening quotes of the docstring, not in the same line.
There is a blank line between the docstring and the first line
of code ``foo = 1``.
The closing quotes should be in the next line, not in this one."""
foo = 1
bar = 2
return foo + bar
Section 1: short summary#
The short summary is a single sentence that expresses what the function does in
a concise way.
The short summary must start with a capital letter, end with a dot, and fit in
a single line. It needs to express what the object does without providing
details. For functions and methods, the short summary must start with an
infinitive verb.
Good:
def astype(dtype):
"""
Cast Series type.
This section will provide further details.
"""
pass
Bad:
def astype(dtype):
"""
Casts Series type.
Verb in third-person of the present simple, should be infinitive.
"""
pass
def astype(dtype):
"""
Method to cast Series type.
Does not start with verb.
"""
pass
def astype(dtype):
"""
Cast Series type
Missing dot at the end.
"""
pass
def astype(dtype):
"""
Cast Series type from its current type to the new type defined in
the parameter dtype.
Summary is too verbose and doesn't fit in a single line.
"""
pass
Section 2: extended summary#
The extended summary provides details on what the function does. It should not
go into the details of the parameters, or discuss implementation notes, which
go in other sections.
A blank line is left between the short summary and the extended summary.
Every paragraph in the extended summary ends with a dot.
The extended summary should provide details on why the function is useful and
its use cases, if it is not too generic.
def unstack():
"""
Pivot a row index to columns.
When using a MultiIndex, a level can be pivoted so each value in
the index becomes a column. This is especially useful when a subindex
is repeated for the main index, and data is easier to visualize as a
pivot table.
The index level will be automatically removed from the index when added
as columns.
"""
pass
Section 3: parameters#
The details of the parameters will be added in this section. This section has
the title “Parameters”, followed by a line with a hyphen under each letter of
the word “Parameters”. A blank line is left before the section title, but not
after, and not between the line with the word “Parameters” and the one with
the hyphens.
After the title, each parameter in the signature must be documented, including
*args and **kwargs, but not self.
The parameters are defined by their name, followed by a space, a colon, another
space, and the type (or types). Note that the space between the name and the
colon is important. Types are not defined for *args and **kwargs, but must
be defined for all other parameters. After the parameter definition, it is
required to have a line with the parameter description, which is indented, and
can have multiple lines. The description must start with a capital letter, and
finish with a dot.
For keyword arguments with a default value, the default will be listed after a
comma at the end of the type. The exact form of the type in this case will be
“int, default 0”. In some cases it may be useful to explain what the default
argument means, which can be added after a comma “int, default -1, meaning all
cpus”.
In cases where the default value is None, meaning that the value will not be
used, it is preferred to write "str, optional" instead of "str, default None".
When None is a value being used, we will keep the form “str, default None”.
For example, in df.to_csv(compression=None), None is not a value being used,
but means that compression is optional, and no compression is being used if not
provided. In this case we will use "str, optional". Only in cases like
func(value=None) and None is being used in the same way as 0 or foo
would be used, then we will specify “str, int or None, default None”.
Good:
class Series:
def plot(self, kind, color='blue', **kwargs):
"""
Generate a plot.
Render the data in the Series as a matplotlib plot of the
specified kind.
Parameters
----------
kind : str
Kind of matplotlib plot.
color : str, default 'blue'
Color name or rgb code.
**kwargs
These parameters will be passed to the matplotlib plotting
function.
"""
pass
Bad:
class Series:
def plot(self, kind, **kwargs):
"""
Generate a plot.
Render the data in the Series as a matplotlib plot of the
specified kind.
Note the blank line between the parameters title and the first
parameter. Also, note that after the name of the parameter ``kind``
and before the colon, a space is missing.
Also, note that the parameter descriptions do not start with a
capital letter, and do not finish with a dot.
Finally, the ``**kwargs`` parameter is missing.
Parameters
----------
kind: str
kind of matplotlib plot
"""
pass
Parameter types#
When specifying the parameter types, Python built-in data types can be used
directly (the Python type is preferred to the more verbose string, integer,
boolean, etc):
int
float
str
bool
For complex types, define the subtypes. For dict and tuple, as more than
one type is present, we use the brackets to help read the type (curly brackets
for dict and normal brackets for tuple):
list of int
dict of {str : int}
tuple of (str, int, int)
tuple of (str,)
set of str
In case where there are just a set of values allowed, list them in curly
brackets and separated by commas (followed by a space). If the values are
ordinal and they have an order, list them in this order. Otherwise, list
the default value first, if there is one:
{0, 10, 25}
{‘simple’, ‘advanced’}
{‘low’, ‘medium’, ‘high’}
{‘cat’, ‘dog’, ‘bird’}
If the type is defined in a Python module, the module must be specified:
datetime.date
datetime.datetime
decimal.Decimal
If the type is in a package, the module must be also specified:
numpy.ndarray
scipy.sparse.coo_matrix
If the type is a pandas type, also specify pandas except for Series and
DataFrame:
Series
DataFrame
pandas.Index
pandas.Categorical
pandas.arrays.SparseArray
If the exact type is not relevant, but must be compatible with a NumPy
array, array-like can be specified. If any type that can be iterated is
accepted, iterable can be used:
array-like
iterable
If more than one type is accepted, separate them by commas, except the
last two types, that need to be separated by the word ‘or’:
int or float
float, decimal.Decimal or None
str or list of str
If None is one of the accepted values, it always needs to be the last in
the list.
For axis, the convention is to use something like:
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
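Putting several of these type conventions together, a hypothetical Parameters block (not taken from a real pandas method) could look like this:
def aggregate(data, how='sum', weights=None, labels=None, axis=None):
    """
    Illustrate the parameter type conventions above.

    Parameters
    ----------
    data : Series or DataFrame
        Object whose values are aggregated.
    how : {'sum', 'mean', 'max'}, default 'sum'
        Aggregation applied to each group.
    weights : dict of {str : float}, optional
        Per-column weights applied before aggregating.
    labels : list of str, optional
        Names used for the resulting groups.
    axis : {0 or 'index', 1 or 'columns', None}, default None
        Axis along which to aggregate.
    """
    pass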
Section 4: returns or yields#
If the method returns a value, it will be documented in this section. Also
if the method yields its output.
The title of the section will be defined in the same way as the “Parameters”.
With the names “Returns” or “Yields” followed by a line with as many hyphens
as the letters in the preceding word.
The documentation of the return is also similar to the parameters. But in this
case, no name will be provided, unless the method returns or yields more than
one value (a tuple of values).
The types for “Returns” and “Yields” are the same as the ones for the
“Parameters”. Also, the description must finish with a dot.
For example, with a single value:
def sample():
"""
Generate and return a random number.
The value is sampled from a continuous uniform distribution between
0 and 1.
Returns
-------
float
Random number generated.
"""
return np.random.random()
With more than one value:
import string
def random_letters():
"""
Generate and return a sequence of random letters.
The length of the returned string is also random, and is also
returned.
Returns
-------
length : int
Length of the returned string.
letters : str
String of random letters.
"""
length = np.random.randint(1, 10)
letters = ''.join(np.random.choice(string.ascii_lowercase)
for i in range(length))
return length, letters
If the method yields its value:
def sample_values():
"""
Generate an infinite sequence of random numbers.
The values are sampled from a continuous uniform distribution between
0 and 1.
Yields
------
float
Random number generated.
"""
while True:
yield np.random.random()
Section 5: see also#
This section is used to let users know about pandas functionality
related to the one being documented. In rare cases, if no related methods
or functions can be found at all, this section can be skipped.
An obvious example would be the head() and tail() methods. As tail() does
the equivalent of head() but at the end of the Series or DataFrame
instead of at the beginning, it is good to let the users know about it.
To give an intuition on what can be considered related, here there are some
examples:
loc and iloc, as they do the same, but in one case providing indices
and in the other positions
max and min, as they do the opposite
iterrows, itertuples and items, as it is easy that a user
looking for the method to iterate over columns ends up in the method to
iterate over rows, and vice-versa
fillna and dropna, as both methods are used to handle missing values
read_csv and to_csv, as they are complementary
merge and join, as one is a generalization of the other
astype and pandas.to_datetime, as users may be reading the
documentation of astype to know how to cast as a date, and the way to do
it is with pandas.to_datetime
where is related to numpy.where, as its functionality is based on it
When deciding what is related, you should mainly use your common sense and
think about what can be useful for the users reading the documentation,
especially the less experienced ones.
When relating to other libraries (mainly numpy), use the name of the module
first (not an alias like np). If the function is in a module which is not
the main one, like scipy.sparse, list the full module (e.g.
scipy.sparse.coo_matrix).
This section has a header, “See Also” (note the capital
S and A), followed by the line with hyphens and preceded by a blank line.
After the header, we will add a line for each related method or function,
followed by a space, a colon, another space, and a short description that
illustrates what this method or function does, why it is relevant in this
context, and what the key differences are between the documented function and
the one being referenced. The description must also end with a dot.
Note that in “Returns” and “Yields”, the description is located on the line
after the type. In this section, however, it is located on the same
line, with a colon in between. If the description does not fit on the same
line, it can continue onto other lines which must be further indented.
For example:
class Series:
def head(self):
"""
Return the first 5 elements of the Series.
This function is mainly useful to preview the values of the
Series without displaying the whole of it.
Returns
-------
Series
Subset of the original series with the 5 first values.
See Also
--------
Series.tail : Return the last 5 elements of the Series.
Series.iloc : Return a slice of the elements in the Series,
which can also be used to return the first or last n.
"""
return self.iloc[:5]
Section 6: notes#
This is an optional section used for notes about the implementation of the
algorithm, or to document technical aspects of the function behavior.
Feel free to skip it, unless you are familiar with the implementation of the
algorithm, or you discover some counter-intuitive behavior while writing the
examples for the function.
This section follows the same format as the extended summary section.
Section 7: examples#
This is one of the most important sections of a docstring, despite being
placed in the last position, as often people understand concepts better
by example than through accurate explanations.
Examples in docstrings, besides illustrating the usage of the function or
method, must be valid Python code, that returns the given output in a
deterministic way, and that can be copied and run by users.
Examples are presented as a session in the Python terminal. >>> is used to
present code. ... is used for code continuing from the previous line.
Output is presented immediately after the last line of code generating the
output (no blank lines in between). Comments describing the examples can
be added with blank lines before and after them.
The way to present examples is as follows:
Import required libraries (except numpy and pandas)
Create the data required for the example
Show a very basic example that gives an idea of the most common use case
Add examples with explanations that illustrate how the parameters can be
used for extended functionality
A simple example could be:
class Series:
def head(self, n=5):
"""
Return the first elements of the Series.
This function is mainly useful to preview the values of the
Series without displaying all of it.
Parameters
----------
n : int
Number of values to return.
Return
------
pandas.Series
Subset of the original series with the n first values.
See Also
--------
tail : Return the last n elements of the Series.
Examples
--------
>>> s = pd.Series(['Ant', 'Bear', 'Cow', 'Dog', 'Falcon',
... 'Lion', 'Monkey', 'Rabbit', 'Zebra'])
>>> s.head()
0 Ant
1 Bear
2 Cow
3 Dog
4 Falcon
dtype: object
With the ``n`` parameter, we can change the number of returned rows:
>>> s.head(n=3)
0 Ant
1 Bear
2 Cow
dtype: object
"""
return self.iloc[:n]
The examples should be as concise as possible. In cases where the complexity of
the function requires long examples, it is recommended to use blocks with headers
in bold. Use double star ** to make a text bold, like in **this example**.
Conventions for the examples#
Code in examples is assumed to always start with these two lines which are not
shown:
import numpy as np
import pandas as pd
Any other module used in the examples must be explicitly imported, one per line (as
recommended in PEP 8#imports)
and avoiding aliases. Avoid excessive imports, but if needed, imports from
the standard library go first, followed by third-party libraries (like
matplotlib).
When illustrating examples with a single Series use the name s, and if
illustrating with a single DataFrame use the name df. For indices,
idx is the preferred name. If a set of homogeneous Series or
DataFrame is used, name them s1, s2, s3… or df1,
df2, df3… If the data is not homogeneous, and more than one structure
is needed, name them with something meaningful, for example df_main and
df_to_join.
Data used in the example should be as compact as possible. The number of rows
is recommended to be around 4, but make it a number that makes sense for the
specific example. For example in the head method, it requires to be higher
than 5, to show the example with the default values. If doing the mean, we
could use something like [1, 2, 3], so it is easy to see that the value
returned is the mean.
For more complex examples (grouping for example), avoid using data without
interpretation, like a matrix of random numbers with columns A, B, C, D…
And instead use a meaningful example, which makes it easier to understand the
concept. Unless required by the example, use names of animals and their numerical
properties, to keep the examples consistent.
When calling the method, keyword arguments head(n=3) are preferred to
positional arguments head(3).
Good:
class Series:
def mean(self):
"""
Compute the mean of the input.
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s.mean()
2
"""
pass
def fillna(self, value):
"""
Replace missing values by ``value``.
Examples
--------
>>> s = pd.Series([1, np.nan, 3])
>>> s.fillna(0)
[1, 0, 3]
"""
pass
def groupby_mean(self):
"""
Group by index and return mean.
Examples
--------
>>> s = pd.Series([380., 370., 24., 26],
... name='max_speed',
... index=['falcon', 'falcon', 'parrot', 'parrot'])
>>> s.groupby_mean()
index
falcon 375.0
parrot 25.0
Name: max_speed, dtype: float64
"""
pass
def contains(self, pattern, case_sensitive=True, na=numpy.nan):
"""
Return whether each value contains ``pattern``.
In this case, we are illustrating how to use sections, even
if the example is simple enough and does not require them.
Examples
--------
>>> s = pd.Series('Antelope', 'Lion', 'Zebra', np.nan)
>>> s.contains(pattern='a')
0 False
1 False
2 True
3 NaN
dtype: bool
**Case sensitivity**
With ``case_sensitive`` set to ``False`` we can match ``a`` with both
``a`` and ``A``:
>>> s.contains(pattern='a', case_sensitive=False)
0 True
1 False
2 True
3 NaN
dtype: bool
**Missing values**
We can fill missing values in the output using the ``na`` parameter:
>>> s.contains(pattern='a', na=False)
0 False
1 False
2 True
3 False
dtype: bool
"""
pass
Bad:
def method(foo=None, bar=None):
"""
A sample DataFrame method.
Do not import NumPy and pandas.
Try to use meaningful data, when it makes the example easier
to understand.
Try to avoid positional arguments like in ``df.method(1)``. They
can be all right if previously defined with a meaningful name,
like in ``present_value(interest_rate)``, but avoid them otherwise.
When presenting the behavior with different parameters, do not place
all the calls one next to the other. Instead, add a short sentence
explaining what the example shows.
Examples
--------
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.randn(3, 3),
... columns=('a', 'b', 'c'))
>>> df.method(1)
21
>>> df.method(bar=14)
123
"""
pass
Tips for getting your examples to pass the doctests#
Getting the examples to pass the doctests in the validation script can sometimes
be tricky. Here are some attention points:
Import all needed libraries (except for pandas and NumPy, those are already
imported as import pandas as pd and import numpy as np) and define
all variables you use in the example.
Try to avoid using random data. However random data might be OK in some
cases, like if the function you are documenting deals with probability
distributions, or if the amount of data needed to make the function result
meaningful is too much, such that creating it manually is very cumbersome.
In those cases, always use a fixed random seed to make the generated examples
predictable. Example:
>>> np.random.seed(42)
>>> df = pd.DataFrame({'normal': np.random.normal(100, 5, 20)})
If you have a code snippet that wraps multiple lines, you need to use ‘…’
on the continued lines:
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], index=['a', 'b', 'c'],
... columns=['A', 'B'])
If you want to show a case where an exception is raised, you can do:
>>> pd.to_datetime(["712-01-01"])
Traceback (most recent call last):
OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 712-01-01 00:00:00
It is essential to include the “Traceback (most recent call last):”, but for
the actual error only the error name is sufficient.
If there is a small part of the result that can vary (e.g. a hash in an object
representation), you can use ... to represent this part.
If you want to show that s.plot() returns a matplotlib AxesSubplot object,
this will fail the doctest
>>> s.plot()
<matplotlib.axes._subplots.AxesSubplot at 0x7efd0c0b0690>
However, you can do (notice the comment that needs to be added)
>>> s.plot()
<matplotlib.axes._subplots.AxesSubplot at ...>
Plots in examples#
There are some methods in pandas returning plots. To render the plots generated
by the examples in the documentation, the .. plot:: directive exists.
To use it, place the next code after the “Examples” header as shown below. The
plot will be generated automatically when building the documentation.
class Series:
def plot(self):
"""
Generate a plot with the ``Series`` data.
Examples
--------
.. plot::
:context: close-figs
>>> s = pd.Series([1, 2, 3])
>>> s.plot()
"""
pass
Sharing docstrings#
pandas has a system for sharing docstrings, with slight variations, between
classes. This helps us keep docstrings consistent, while keeping things clear
for the user reading. It comes at the cost of some complexity when writing.
Each shared docstring will have a base template with variables, like
{klass}. The variables are filled in later on using the doc decorator.
Finally, docstrings can also be appended to with the doc decorator.
In this example, we’ll create a parent docstring normally (this is like
pandas.core.generic.NDFrame). Then we’ll have two children (like
pandas.core.series.Series and pandas.core.frame.DataFrame). We’ll
substitute the class names in this docstring.
class Parent:
@doc(klass="Parent")
def my_function(self):
"""Apply my function to {klass}."""
...
class ChildA(Parent):
@doc(Parent.my_function, klass="ChildA")
def my_function(self):
...
class ChildB(Parent):
@doc(Parent.my_function, klass="ChildB")
def my_function(self):
...
The resulting docstrings are
>>> print(Parent.my_function.__doc__)
Apply my function to Parent.
>>> print(ChildA.my_function.__doc__)
Apply my function to ChildA.
>>> print(ChildB.my_function.__doc__)
Apply my function to ChildB.
Notice:
We “append” the parent docstring to the children docstrings, which are
initially empty.
Our files will often contain a module-level _shared_doc_kwargs with some
common substitution values (things like klass, axes, etc).
You can substitute and append in one shot with something like
@doc(template, **_shared_doc_kwargs)
def my_function(self):
...
where template may come from a module-level _shared_docs dictionary
mapping function names to docstrings. Wherever possible, we prefer using
doc, since the docstring-writing process is slightly closer to normal.
See pandas.core.generic.NDFrame.fillna for an example template, and
pandas.core.series.Series.fillna and pandas.core.generic.frame.fillna
for the filled versions.
| 155
| 1,135
|
how do i fix this ? TypeError: cannot unpack non-iterable AxesSubplot object
s = "sat2019|sat2020".split("|")
j = {x:[random.choice(["EWR","Math"]
)for j in range(300)] for x in s}
df = pd.DataFrame(j)
index = np.arange(5)
bar_width = 0.35
fig, ax = plt.subplot()
Sat2019 = ax.bar(index,df["sat2019"].value_counts(),bar_width, label="sat2019")
Sat2020 = ax.bar(index+bar_width,df["sat2020"].value_counts(),bar_width, label="sat2020")
ax.set_xlabel('category')
ax.set_ylabel('incidence')
ax.set_title("diff between years")
ax.set_xticks(index+bar_width/2)
ax.set_xticklabels(["ERW","Math"])
ax.legend()
plt.show()
The above is my code and below is my error. How do I fix this?
fig, ax = plt.subplot()
TypeError: cannot unpack non-iterable AxesSubplot object
I get this error; how do I fix it?
|
62,533,488
|
How do I convert a non zero padded day string to a useful date in pandas
|
<p>I'm trying to import a date string with non-zero padded day, zero padded month, and year without century to create a datetime e.g. (11219 to 01/12/19). However, pandas cannot distinguish between the day and the month (e.g. 11219 could be 11th February, 2019 or 1st December, 2019).</p>
<p>I've tried using 'dayfirst' and the '#' in the day e.g. %#d, but nothing works. Code below, any advice?</p>
<p><strong>Code:</strong></p>
<pre><code>df_import['newDate'] = pd.to_datetime(df_import['Date'], format='%d/%m/%Y', dayfirst = True)
</code></pre>
<p><strong>Error:</strong></p>
<blockquote>
<p>time data '11219' does not match format '%d/%m/%Y' (match)</p>
</blockquote>
| 62,534,095
| 2020-06-23T11:17:57.107000
| 2
| null | 2
| 469
|
pandas
|
<p>Since only the <em>day</em> is not zero-padded, the dates are unambiguous. They can simply be parsed by Pandas if we add the pad:</p>
<pre><code>pd.to_datetime(df_import['Date'].str.zfill(6), format='%d%m%y')
</code></pre>
| 2020-06-23T11:54:55.740000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.str.zfill.html
|
pandas.Series.str.zfill#
pandas.Series.str.zfill#
Series.str.zfill(width)[source]#
Pad strings in the Series/Index by prepending ‘0’ characters.
Strings in the Series/Index are padded with ‘0’ characters on the
Since only the day is not zero-padded, the dates are unambiguous. They can simply be parsed by Pandas if we add the pad:
pd.to_datetime(df_import['Date'].str.zfill(6), format='%d%m%y')
left of the string to reach a total string length width. Strings
in the Series/Index with length greater or equal to width are
unchanged.
Parameters
width : int
Minimum length of resulting string; strings with length less
than width will be prepended with ‘0’ characters.
Returns
Series/Index of objects.
See also
Series.str.rjust : Fills the left side of strings with an arbitrary character.
Series.str.ljust : Fills the right side of strings with an arbitrary character.
Series.str.pad : Fills the specified sides of strings with an arbitrary character.
Series.str.center : Fills both sides of strings with an arbitrary character.
Notes
Differs from str.zfill() which has special handling
for ‘+’/’-’ in the string.
Examples
>>> s = pd.Series(['-1', '1', '1000', 10, np.nan])
>>> s
0 -1
1 1
2 1000
3 10
4 NaN
dtype: object
Note that 10 and NaN are not strings, therefore they are
converted to NaN. The minus sign in '-1' is treated as a
special character and the zero is added to the right of it
(str.zfill() would have moved it to the left). 1000
remains unchanged as it is longer than width.
>>> s.str.zfill(3)
0 -01
1 001
2 1000
3 NaN
4 NaN
dtype: object
| 215
| 400
|
How do I convert a non zero padded day string to a useful date in pandas
I'm trying to import a date string with non-zero padded day, zero padded month, and year without century to create a datetime e.g. (11219 to 01/12/19). However, pandas cannot distinguish between the day and the month (e.g. 11219 could be 11th February, 2019 or 1st December, 2019).
I've tried using 'dayfirst' and the '#' in the day e.g. %#d, but nothing works. Code below, any advice?
Code:
df_import['newDate'] = pd.to_datetime(df_import['Date'], format='%d/%m/%Y', dayfirst = True)
Error:
time data '11219' does not match format '%d/%m/%Y' (match)
|
61,497,798
|
Can not read csv file, pandas parser error
|
<p>I am getting a parser error from the pandas lib... not sure what the issue could be.</p>
<pre><code>Traceback (most recent call last):
File "C:/2020/python-nifi/test.py", line 4, in <module>
df = pd.read_csv("C:\\2020\\test\\sum.csv", '\t')
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 454, in _read
data = parser.read(nrows)
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 1133, in read
ret = self._engine.read(nrows)
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 2037, in read
data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 860, in pandas._libs.parsers.TextReader.read
File "pandas\_libs\parsers.pyx", line 875, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas\_libs\parsers.pyx", line 929, in pandas._libs.parsers.TextReader._read_rows
File "pandas\_libs\parsers.pyx", line 916, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas\_libs\parsers.pyx", line 2071, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 5, saw 4
import pandas as pd
df = pd.read_csv("C:\\2020\\test\\sum.csv", sep='\t')
print(df)
</code></pre>
<p>file trying to read is ...</p>
<p><a href="https://i.stack.imgur.com/aXiOa.png" rel="nofollow noreferrer">enter image description here</a></p>
| 61,498,930
| 2020-04-29T08:52:58.323000
| 2
| null | 0
| 1,495
|
python|pandas
|
<p>What if you use <code>df = pd.read_csv("filename", sep='[:,|_]', engine='python')</code>?
That way you can use multiple separators on import.</p>
| 2020-04-29T09:49:57.710000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html
|
pandas.read_csv#
pandas.read_csv#
pandas.read_csv(filepath_or_buffer, *, sep=_NoDefault.no_default, delimiter=None, header='infer', names=_NoDefault.no_default, index_col=None, usecols=None, squeeze=None, prefix=_NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=None, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options=None)[source]#
What if you use df = pd.read_csv("filename", sep='[:,|_]', engine='python')?
That way you can use multiple separators on import.
Read a comma-separated values (csv) file into DataFrame.
Also supports optionally iterating or breaking of the file
into chunks.
Additional help can be found in the online docs for
IO Tools.
Parameters
filepath_or_buffer : str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method, such as
a file handle (e.g. via builtin open function) or StringIO.
sep : str, default ‘,’
Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will
be used and automatically detect the separator by Python’s builtin sniffer
tool, csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
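For instance, a minimal sketch (the file name and pattern are made up) of a regex separator, which selects the Python engine:
import pandas as pd
# a semicolon followed by optional spaces as the delimiter;
# regex separators like this force the Python parsing engine
df = pd.read_csv("measurements.txt", sep=r";\s*", engine="python")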
delimiter : str, default None
Alias for sep.
header : int, list of int, None, default ‘infer’
Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names
are passed the behavior is identical to header=0 and column
names are inferred from the first line of the file, if column
names are passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to
replace existing names. The header can be a list of integers that
specify row locations for a multi-index on the columns
e.g. [0,1,3]. Intervening rows that are not specified will be
skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if
skip_blank_lines=True, so header=0 denotes the first line of
data rather than the first line of the file.
names : array-like, optional
List of column names to use. If the file contains a header row,
then you should explicitly pass header=0 to override the column names.
Duplicates in this list are not allowed.
index_col : int, str, sequence of int / str, or False, optional, default None
Column(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note: index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
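A short sketch of both uses (the file names are hypothetical):
import pandas as pd
# use the "id" column as the row labels
df = pd.read_csv("items.csv", index_col="id")
# malformed file with a trailing delimiter on each line: keep the
# first column as data instead of using it as the index
df = pd.read_csv("trailing_delims.csv", index_col=False)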
usecols : list-like or callable, optional
Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
To instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]
for ['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column
names, returning names where the callable function evaluates to True. An
example of a valid callable argument would be lambda x: x.upper() in
['AAA', 'BBB', 'DDD']. Using this parameter results in much faster
parsing time and lower memory usage.
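A quick sketch of the list and callable forms (the file and column names are made up):
import pandas as pd
# keep only two named columns
df = pd.read_csv("sales.csv", usecols=["date", "amount"])
# keep every column whose name starts with "q", case-insensitively
df = pd.read_csv("survey.csv", usecols=lambda name: name.lower().startswith("q"))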
squeeze : bool, default False
If the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze
the data.
prefix : str, optional
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
mangle_dupe_cols : bool, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than
‘X’…’X’. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
Deprecated since version 1.5.0: Not implemented, and a new argument to specify the pattern for the
names of duplicated columns will be added instead
dtype : Type name or dict of column -> type, optional
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32,
‘c’: ‘Int64’}
Use str or object together with suitable na_values settings
to preserve and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
engine : {‘c’, ‘python’, ‘pyarrow’}, optional
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
converters : dict, optional
Dict of functions for converting values in certain columns. Keys can either
be integers or column labels.
true_values : list, optional
Values to consider as True.
false_values : list, optional
Values to consider as False.
skipinitialspace : bool, default False
Skip spaces after delimiter.
skiprows : list-like, int or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int)
at the start of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise.
An example of a valid callable argument would be lambda x: x in [0, 2].
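For example (hypothetical file):
import pandas as pd
# skip three lines of preamble at the top of the file
df = pd.read_csv("report.csv", skiprows=3)
# keep the header (row 0) but skip every other data row
df = pd.read_csv("report.csv", skiprows=lambda i: i > 0 and i % 2 == 0)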
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine=’c’).
nrows : int, optional
Number of rows of file to read. Useful for reading pieces of large files.
na_values : scalar, str, list-like, or dict, optional
Additional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted as
NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’.
keep_default_na : bool, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
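A small sketch of how these options interact (hypothetical file):
import pandas as pd
# add "missing" and "n.a." to the default NaN markers
df = pd.read_csv("survey.csv", na_values=["missing", "n.a."])
# only "missing" is treated as NaN; the default markers are ignored
df = pd.read_csv("survey.csv", na_values=["missing"], keep_default_na=False)
# disable NA detection entirely (can speed up reading clean files)
df = pd.read_csv("survey.csv", na_filter=False)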
na_filter : bool, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : bool, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : bool, default True
If True, skip over blank lines rather than interpreting as NaN values.
parse_dates : bool or list of int or names or list of lists or dict, default False
The behavior is as follows:
boolean. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
If a column or index cannot be represented as an array of datetimes,
say because of an unparsable value or a mixture of timezones, the column
or index will be returned unaltered as an object data type. For
non-standard datetime parsing, use pd.to_datetime after
pd.read_csv. To parse an index or column with a mixture of timezones,
specify date_parser to be a partially-applied
pandas.to_datetime() with utc=True. See
Parsing a CSV with mixed timezones for more.
Note: A fast-path exists for iso8601-formatted dates.
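To illustrate the list and dict forms (a sketch; the file and column names are made up):
import pandas as pd
# parse columns 1 and 2 as two separate date columns
df = pd.read_csv("log.csv", parse_dates=[1, 2])
# combine the "year", "month" and "day" columns into a single "when" column
df = pd.read_csv("log.csv", parse_dates={"when": ["year", "month", "day"]})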
infer_datetime_format : bool, default False
If True and parse_dates is enabled, pandas will attempt to infer the
format of the datetime strings in the columns, and if it can be inferred,
switch to a faster method of parsing them. In some cases this can increase
the parsing speed by 5-10x.
keep_date_col : bool, default False
If True and parse_dates specifies combining multiple columns then
keep the original columns.
date_parserfunction, optionalFunction to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by parse_dates into a single array
and pass that; and 3) call date_parser once for each row using one or
more strings (corresponding to the columns defined by parse_dates) as
arguments.
dayfirstbool, default FalseDD/MM format dates, international and European format.
cache_datesbool, default TrueIf True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
iteratorbool, default FalseReturn TextFileReader object for iteration or getting chunks with
get_chunk().
Changed in version 1.2: TextFileReader is a context manager.
chunksizeint, optionalReturn TextFileReader object for iteration.
See the IO Tools docs
for more information on iterator and chunksize.
Changed in version 1.2: TextFileReader is a context manager.
compressionstr or dict, default ‘infer’For on-the-fly decompression of on-disk data. If ‘infer’ and ‘filepath_or_buffer’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
thousandsstr, optionalThousands separator.
decimalstr, default ‘.’Character to recognize as decimal point (e.g. use ‘,’ for European data).
lineterminatorstr (length 1), optionalCharacter to break file into lines. Only valid with C parser.
quotecharstr (length 1), optionalThe character used to denote the start and end of a quoted item. Quoted
items can include the delimiter and it will be ignored.
quotingint or csv.QUOTE_* instance, default 0Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequotebool, default TrueWhen quotechar is specified and quoting is not QUOTE_NONE, indicate
whether or not to interpret two consecutive quotechar elements INSIDE a
field as a single quotechar element.
escapecharstr (length 1), optionalOne-character string used to escape other characters.
commentstr, optionalIndicates remainder of line should not be parsed. If found at the beginning
of a line, the line will be ignored altogether. This parameter must be a
single character. Like empty lines (as long as skip_blank_lines=True),
fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing
#empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being
treated as the header.
encodingstr, optionalEncoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python
standard encodings .
Changed in version 1.2: When encoding is None, errors="replace" is passed to
open(). Otherwise, errors="strict" is passed to open().
This behavior was previously only the case for engine="python".
Changed in version 1.3.0: encoding_errors is a new argument. encoding has no longer an
influence on how encoding errors are handled.
encoding_errorsstr, optional, default “strict”How encoding errors are treated. List of possible values .
New in version 1.3.0.
dialectstr or csv.Dialect, optionalIf provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
error_bad_linesbool, optional, default NoneLines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned.
If False, then these “bad lines” will be dropped from the DataFrame that is
returned.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
warn_bad_linesbool, optional, default NoneIf error_bad_lines is False, and warn_bad_lines is True, a warning for each
“bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
on_bad_lines{‘error’, ‘warn’, ‘skip’} or callable, default ‘error’Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are :
‘error’, raise an Exception when a bad line is encountered.
‘warn’, raise a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
New in version 1.4.0:
callable, function with signature
(bad_line: list[str]) -> list[str] | None that will process a single
bad line. bad_line is a list of strings split by the sep.
If the function returns None, the bad line will be ignored.
If the function returns a new list of strings with more elements than
expected, a ParserWarning will be emitted while dropping extra elements.
Only supported when engine="python"
delim_whitespacebool, default FalseSpecifies whether or not whitespace (e.g. ' ' or ' ') will be
used as the sep. Equivalent to setting sep='\s+'. If this option
is set to True, nothing should be passed in for the delimiter
parameter.
low_memorybool, default TrueInternally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser).
memory_mapbool, default FalseIf a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
float_precisionstr, optionalSpecifies which converter the C engine should use for floating-point
values. The options are None or ‘high’ for the ordinary converter,
‘legacy’ for the original lower precision pandas converter, and
‘round_trip’ for the round-trip converter.
Changed in version 1.2.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.
Returns
DataFrame or TextParserA comma-separated values (csv) file is returned as a two-dimensional
data structure with labeled axes.
See also
DataFrame.to_csvWrite DataFrame to a comma-separated values (csv) file.
read_csvRead a comma-separated values (csv) file into DataFrame.
read_fwfRead a table of fixed-width formatted lines into DataFrame.
Examples
>>> pd.read_csv('data.csv')
| 1,025
| 1,147
|
Can not read csv file, pandas parser error
I am getting parser error from pandas lib...not sure what could be the issue.
Traceback (most recent call last):
File "C:/2020/python-nifi/test.py", line 4, in <module>
df = pd.read_csv("C:\\2020\\test\\sum.csv", '\t')
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 454, in _read
data = parser.read(nrows)
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 1133, in read
ret = self._engine.read(nrows)
File "C:\2020\python-nifi\venv\lib\site-packages\pandas\io\parsers.py", line 2037, in read
data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 860, in pandas._libs.parsers.TextReader.read
File "pandas\_libs\parsers.pyx", line 875, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas\_libs\parsers.pyx", line 929, in pandas._libs.parsers.TextReader._read_rows
File "pandas\_libs\parsers.pyx", line 916, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas\_libs\parsers.pyx", line 2071, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 5, saw 4
import pandas as pd
df = pd.read_csv("C:\\2020\\test\\sum.csv", sep='\t')
print(df)
file trying to read is ...
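If some lines in the file really do contain a different number of fields, a hedged option (pandas 1.3+, see the on_bad_lines parameter documented above) is to skip them explicitly while keeping the tab separator as a keyword argument:
df = pd.read_csv("C:\\2020\\test\\sum.csv", sep='\t', on_bad_lines='skip')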
|
67,605,527
|
pandas rolling get most frequent value in window
|
<p>I was wondering how one could obtain the most frequent value of in a rolling window of a time series, i.e. something like</p>
<pre><code>df.rolling(window = 2).rolling.count_values().idxmax()
</code></pre>
<p>Since for the column of a data frame, one can use</p>
<pre><code>df["value"].count_values().idxmax()
</code></pre>
<p>However, I only get an error message reading <code> 'Rolling' object has no attribute 'count_values'</code></p>
| 67,605,730
| 2021-05-19T14:44:11.413000
| 1
| null | 1
| 494
|
python|pandas
|
<p>Use <code>rolling().apply</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mode.html#pandas.Series.mode" rel="nofollow noreferrer">mode</a> method:</p>
<pre class="lang-py prettyprint-override"><code>df.rolling(window = 3).apply(lambda x: x.mode()[0])
</code></pre>
<p><code>x</code> will be a <code>pd.Series</code> object of length equal to the window. The <code>mode</code> method returns a Series of the most frequent value(s), and the <code>lambda</code> returns the first of them.</p>
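<p>A minimal runnable sketch of the same idea on a small made-up Series (the values are purely illustrative; <code>raw=False</code> is passed explicitly so each window arrives as a <code>pd.Series</code>):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

s = pd.Series([1, 1, 2, 2, 2, 3])
# first mode of each 3-value window; the first two results are NaN
print(s.rolling(window=3).apply(lambda x: x.mode()[0], raw=False))
</code></pre>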
| 2021-05-19T14:56:48.457000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.rolling.html
|
pandas.Series.rolling#
pandas.Series.rolling#
Use rolling().apply with mode method:
df.rolling(window = 3).apply(lambda x: x.mode()[0])
x will be a pd.Series object of length equal to the window. The mode method returns a Series of the most frequent value(s), and the lambda returns the first of them.
Series.rolling(window, min_periods=None, center=False, win_type=None, on=None, axis=0, closed=None, step=None, method='single')[source]#
Provide rolling window calculations.
Parameters
windowint, offset, or BaseIndexer subclassSize of the moving window.
If an integer, the fixed number of observations used for
each window.
If an offset, the time period of each window. Each
window will be a variable sized based on the observations included in
the time-period. This is only valid for datetimelike indexes.
To learn more about the offsets & frequency strings, please see this link.
If a BaseIndexer subclass, the window boundaries
based on the defined get_window_bounds method. Additional rolling
keyword arguments, namely min_periods, center, closed and
step will be passed to get_window_bounds.
min_periodsint, default NoneMinimum number of observations in window required to have a value;
otherwise, result is np.nan.
For a window that is specified by an offset, min_periods will default to 1.
For a window that is specified by an integer, min_periods will default
to the size of the window.
centerbool, default FalseIf False, set the window labels as the right edge of the window index.
If True, set the window labels as the center of the window index.
win_typestr, default NoneIf None, all points are evenly weighted.
If a string, it must be a valid scipy.signal window function.
Certain Scipy window types require additional parameters to be passed
in the aggregation function. The additional parameters must match
the keywords specified in the Scipy window type method signature.
onstr, optionalFor a DataFrame, a column label or Index level on which
to calculate the rolling window, rather than the DataFrame’s index.
Provided integer column is ignored and excluded from result since
an integer index is not used to calculate the rolling window.
axisint or str, default 0If 0 or 'index', roll across the rows.
If 1 or 'columns', roll across the columns.
For Series this parameter is unused and defaults to 0.
closedstr, default NoneIf 'right', the first point in the window is excluded from calculations.
If 'left', the last point in the window is excluded from calculations.
If 'both', no points in the window are excluded from calculations.
If 'neither', the first and last points in the window are excluded
from calculations.
Default None ('right').
Changed in version 1.2.0: The closed parameter with fixed windows is now supported.
stepint, default None
New in version 1.5.0.
Evaluate the window at every step result, equivalent to slicing as
[::step]. window must be an integer. Using a step argument other
than None or 1 will produce a result with a different shape than the input.
methodstr {‘single’, ‘table’}, default ‘single’
New in version 1.3.0.
Execute the rolling operation per single column or row ('single')
or over the entire object ('table').
This argument is only implemented when specifying engine='numba'
in the method call.
Returns
Window subclass if a win_type is passed
Rolling subclass if win_type is not passed
See also
expandingProvides expanding transformations.
ewmProvides exponential weighted functions.
Notes
See Windowing Operations for further usage details
and examples.
Examples
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
window
Rolling sum with a window length of 2 observations.
>>> df.rolling(2).sum()
B
0 NaN
1 1.0
2 3.0
3 NaN
4 NaN
Rolling sum with a window span of 2 seconds.
>>> df_time = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
... index = [pd.Timestamp('20130101 09:00:00'),
... pd.Timestamp('20130101 09:00:02'),
... pd.Timestamp('20130101 09:00:03'),
... pd.Timestamp('20130101 09:00:05'),
... pd.Timestamp('20130101 09:00:06')])
>>> df_time
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
>>> df_time.rolling('2s').sum()
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Rolling sum with forward looking windows with 2 observations.
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2)
>>> df.rolling(window=indexer, min_periods=1).sum()
B
0 1.0
1 3.0
2 2.0
3 4.0
4 4.0
min_periods
Rolling sum with a window length of 2 observations, but only needs a minimum of 1
observation to calculate a value.
>>> df.rolling(2, min_periods=1).sum()
B
0 0.0
1 1.0
2 3.0
3 2.0
4 4.0
center
Rolling sum with the result assigned to the center of the window index.
>>> df.rolling(3, min_periods=1, center=True).sum()
B
0 1.0
1 3.0
2 3.0
3 6.0
4 4.0
>>> df.rolling(3, min_periods=1, center=False).sum()
B
0 0.0
1 1.0
2 3.0
3 3.0
4 6.0
step
Rolling sum with a window length of 2 observations, minimum of 1 observation to
calculate a value, and a step of 2.
>>> df.rolling(2, min_periods=1, step=2).sum()
B
0 0.0
2 3.0
4 4.0
win_type
Rolling sum with a window length of 2, using the Scipy 'gaussian'
window type. std is required in the aggregation function.
>>> df.rolling(2, win_type='gaussian').sum(std=3)
B
0 NaN
1 0.986207
2 2.958621
3 NaN
4 NaN
| 48
| 308
|
pandas rolling get most frequent value in window
I was wondering how one could obtain the most frequent value of in a rolling window of a time series, i.e. something like
df.rolling(window = 2).rolling.count_values().idxmax()
Since for the column of a data frame, one can use
df["value"].count_values().idxmax()
However, I only get an error message reading 'Rolling' object has no attribute 'count_values'
|
69,548,953
|
Groupby Pandas , calculate multiple columns based on date difference
|
<p>I have a pandas dataframe shown below:</p>
<pre><code>CID RefID Date Group MID
100 1 1/01/2021 A
100 2 3/01/2021 A
100 3 4/01/2021 A 101
100 4 15/01/2021 A
100 5 18/01/2021 A
200 6 3/03/2021 B
200 7 4/04/2021 B
200 8 9/04/2021 B 102
200 9 25/04/2021 B
300 10 26/04/2021 C
300 11 27/05/2021 C
300 12 28/05/2021 C 103
</code></pre>
<p>I want to create three columns:</p>
<p>days_diff:</p>
<ol>
<li><p>This has to be created in a way that if the difference b/w the first Date and corresponding rows is greater than 30 belonging to the same CID then assign 'NAT' or 0 to the next row (reset) and then subtract the date with this row for the following values</p>
</li>
<li><p>If MID is not null and belongs to the same CID group, assign 'NAT' or 0 to the next row (reset)
and then subtract the date with this row for the following values</p>
</li>
</ol>
<p>Otherwise just fetch the date difference b/w the first row belonging to the same CID for the corresponding rows</p>
<p>A:
This depends on the days_diff column , this column is like a counter it will only change/increment when there's another NAT occurrence for the same CID and reset itself for every CID.</p>
<p>B: This column depends on the column A , if the value in A remains same it won't change otherwise increments</p>
<p>It's a bit complicated to explain please refer to the output below for reference. I have used <code>.groupby()</code> <code>.diff()</code> and <code> .shift()</code> methods to create multiple dummy columns in order to calculate this and still working on it, please let me know the best way to go about this, thanks</p>
<p>My expected output :</p>
<pre><code>CID RefID Date Group MID days_diff A B
100 1 1/01/2021 A NAT 1 1
100 2 3/01/2021 A 2 days 1 1
100 3 4/01/2021 A 101 3 days 1 1
100 4 15/01/2021 A NAT 2 4
100 5 18/01/2021 A 3 days 2 4
200 6 3/03/2021 B NAT 1 6
200 7 4/04/2021 B NAT 2 7
200 8 9/04/2021 B 102 5 days 2 7
200 9 25/04/2021 B NAT 3 9
300 10 26/04/2021 C NAT 1 10
300 11 27/05/2021 C NAT 2 11
300 12 28/05/2021 C 103 1 day 2 11
</code></pre>
| 69,554,094
| 2021-10-13T02:16:17.900000
| 1
| 0
| -1
| 244
|
python|pandas
|
<p>You could do something like this:</p>
<pre><code>def days_diff(sdf):
result = pd.DataFrame(
{"days_diff": pd.NaT, "A": None}, index=sdf.index
)
start = sdf.at[sdf.index[0], "Date"]
for index, day, next_MID_is_na in zip(
sdf.index[1:], sdf.Date[1:], sdf.MID.shift(1).isna()[1:]
):
diff = (day - start).days
if diff <= 30 and next_MID_is_na:
result.at[index, "days_diff"] = diff
else:
start = day
result.A = result.days_diff.isna().cumsum()
return result
df[["days_diff", "A"]] = df[["CID", "Date", "MID"]].groupby("CID").apply(days_diff)
df["B"] = df.RefID.where(df.A != df.A.shift(1)).ffill()
</code></pre>
<p>Result for <code>df</code> created by</p>
<pre><code>from io import StringIO
data = StringIO(
'''
CID RefID Date Group MID
100 1 1/01/2021 A
100 2 3/01/2021 A
100 3 4/01/2021 A 101
100 4 15/01/2021 A
100 5 18/01/2021 A
200 6 3/03/2021 B
200 7 4/04/2021 B
200 8 9/04/2021 B 102
200 9 25/04/2021 B
300 10 26/04/2021 C
300 11 27/05/2021 C
300 12 28/05/2021 C 103
''')
df = pd.read_csv(data, delim_whitespace=True)
df.Date = pd.to_datetime(df.Date, format="%d/%m/%Y")
</code></pre>
<p>is</p>
<pre><code> CID RefID Date Group MID days_diff A B
0 100 1 2021-01-01 A NaN NaT 1 1.0
1 100 2 2021-01-03 A NaN 2 1 1.0
2 100 3 2021-01-04 A 101.0 3 1 1.0
3 100 4 2021-01-15 A NaN NaT 2 4.0
4 100 5 2021-01-18 A NaN 3 2 4.0
5 200 6 2021-03-03 B NaN NaT 1 6.0
6 200 7 2021-04-04 B NaN NaT 2 7.0
7 200 8 2021-04-09 B 102.0 5 2 7.0
8 200 9 2021-04-25 B NaN NaT 3 9.0
9 300 10 2021-04-26 C NaN NaT 1 10.0
10 300 11 2021-05-27 C NaN NaT 2 11.0
11 300 12 2021-05-28 C 103.0 1 2 11.0
</code></pre>
<hr />
<p>A few explanations:</p>
<ul>
<li>The function <code>days_diff</code> produces a dataframe with the two columns <code>days_diff</code> and <code>A</code>. It is applied to the sub-dataframes of <code>df</code> grouped by the column <code>CID</code>.</li>
<li>First step: Initializing the result dataframe <code>result</code> (column <code>days_diff</code> filled with <code>NaT</code>, column <code>A</code> with <code>None</code>), and setting the starting value <code>start</code> for the day differences to the first day in the group.</li>
<li>Afterwards essentially looping over the sub-dataframe <em>after</em> the first index, thereby grabbing the index, the value in column <code>Date</code>, and a boolean value <code>next_MID_is_na</code> that signifies whether the <code>MID</code> value in the <em>previous</em> row is <code>NaN</code> (via <code>.shift(1).isna()</code>), i.e. whether the current row does not directly follow a row with a filled <code>MID</code>.</li>
<li>In every step of the loop:
<ol>
<li>Calculation of the difference of the current day to the start day.</li>
<li>Checking the rules for the <code>days_diff</code> column:
<ul>
<li>If difference of current and start day <= 30 days <em>and</em> <code>NaN</code> in next <code>MID</code>-row -> day-difference.</li>
<li>Otherwise -> reset of <code>start</code> to the current day.</li>
</ul>
</li>
</ol>
</li>
<li>After finishing column <code>days_diff</code>, calculation of column <code>A</code>: <code>result.days_diff.isna()</code> is <code>True</code> (<code>== 1</code>) when <code>days_diff</code> is <code>NaN</code>, <code>False</code> (<code>== 0</code>) otherwise. Therefore the cumulative sum (<code>.cumsum()</code>) gives the required result.</li>
<li>After the <code>groupby-apply</code> to produce the columns <code>days_diff</code> and <code>A</code> finally the calculation of column <code>B</code>: Selection of <code>RefID</code>-values where the values <code>A</code> change (via <code>.where(df.A != df.A.shift(1))</code>), and then <em>forward</em> filling the remaining <code>NaN</code>s.</li>
</ul>
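<p>As a tiny self-contained illustration of the <code>where</code> + <code>ffill</code> pattern behind column <code>B</code> (the numbers below are made up and independent of the example data):</p>
<pre><code>import pandas as pd

A = pd.Series([1, 1, 2, 2, 3])
RefID = pd.Series([10, 11, 12, 13, 14])
# keep RefID only where A differs from the previous row, then forward fill
B = RefID.where(A != A.shift(1)).ffill()
print(B.tolist())  # [10.0, 10.0, 12.0, 12.0, 14.0]
</code></pre>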
| 2021-10-13T10:45:49.067000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
You could do something like this:
def days_diff(sdf):
result = pd.DataFrame(
{"days_diff": pd.NaT, "A": None}, index=sdf.index
)
start = sdf.at[sdf.index[0], "Date"]
for index, day, next_MID_is_na in zip(
sdf.index[1:], sdf.Date[1:], sdf.MID.shift(1).isna()[1:]
):
diff = (day - start).days
if diff <= 30 and next_MID_is_na:
result.at[index, "days_diff"] = diff
else:
start = day
result.A = result.days_diff.isna().cumsum()
return result
df[["days_diff", "A"]] = df[["CID", "Date", "MID"]].groupby("CID").apply(days_diff)
df["B"] = df.RefID.where(df.A != df.A.shift(1)).ffill()
Result for df created by
from io import StringIO
data = StringIO(
'''
CID RefID Date Group MID
100 1 1/01/2021 A
100 2 3/01/2021 A
100 3 4/01/2021 A 101
100 4 15/01/2021 A
100 5 18/01/2021 A
200 6 3/03/2021 B
200 7 4/04/2021 B
200 8 9/04/2021 B 102
200 9 25/04/2021 B
300 10 26/04/2021 C
300 11 27/05/2021 C
300 12 28/05/2021 C 103
''')
df = pd.read_csv(data, delim_whitespace=True)
df.Date = pd.to_datetime(df.Date, format="%d/%m/%Y")
is
CID RefID Date Group MID days_diff A B
0 100 1 2021-01-01 A NaN NaT 1 1.0
1 100 2 2021-01-03 A NaN 2 1 1.0
2 100 3 2021-01-04 A 101.0 3 1 1.0
3 100 4 2021-01-15 A NaN NaT 2 4.0
4 100 5 2021-01-18 A NaN 3 2 4.0
5 200 6 2021-03-03 B NaN NaT 1 6.0
6 200 7 2021-04-04 B NaN NaT 2 7.0
7 200 8 2021-04-09 B 102.0 5 2 7.0
8 200 9 2021-04-25 B NaN NaT 3 9.0
9 300 10 2021-04-26 C NaN NaT 1 10.0
10 300 11 2021-05-27 C NaN NaT 2 11.0
11 300 12 2021-05-28 C 103.0 1 2 11.0
A few explanations:
The function days_diff produces a dataframe with the two columns days_diff and A. It is applied to the sub-dataframes of df grouped by the column CID.
First step: Initializing the result dataframe result (column days_diff filled with NaT, column A with None), and setting the starting value start for the day differences to the first day in the group.
Afterwards essentially looping over the sub-dataframe after the first index, thereby grabbing the index, the value in column Date, and a boolean value next_MID_is_na that signifies whether the MID value in the previous row is NaN (via .shift(1).isna()), i.e. whether the current row does not directly follow a row with a filled MID.
In every step of the loop:
Calculation of the difference of the current day to the start day.
Checking the rules for the days_diff column:
If difference of current and start day <= 30 days and NaN in next MID-row -> day-difference.
Otherwise -> reset of start to the current day.
After finishing column days_diff, calculation of column A: result.days_diff.isna() is True (== 1) when days_diff is NaN, False (== 0) otherwise. Therefore the cumulative sum (.cumsum()) gives the required result.
After the groupby-apply to produce the columns days_diff and A finally the calculation of column B: Selection of RefID-values where the values A change (via .where(df.A != df.A.shift(1))), and then forward filling the remaining NaNs.
| 0
| 3,603
|
Groupby Pandas , calculate multiple columns based on date difference
I have a pandas dataframe shown below:
CID RefID Date Group MID
100 1 1/01/2021 A
100 2 3/01/2021 A
100 3 4/01/2021 A 101
100 4 15/01/2021 A
100 5 18/01/2021 A
200 6 3/03/2021 B
200 7 4/04/2021 B
200 8 9/04/2021 B 102
200 9 25/04/2021 B
300 10 26/04/2021 C
300 11 27/05/2021 C
300 12 28/05/2021 C 103
I want to create three columns:
days_diff:
This has to be created in a way that if the difference b/w the first Date and corresponding rows is greater than 30 belonging to the same CID then assign 'NAT' or 0 to the next row (reset) and then subtract the date with this row for the following values
If MID is not null and belongs to the same CID group, assign 'NAT' or 0 to the next row (reset)
and then subtract the date with this row for the following values
Otherwise just fetch the date difference b/w the first row belonging to the same CID for the corresponding rows
A:
This depends on the days_diff column , this column is like a counter it will only change/increment when there's another NAT occurrence for the same CID and reset itself for every CID.
B: This column depends on the column A , if the value in A remains same it won't change otherwise increments
It's a bit complicated to explain please refer to the output below for reference. I have used .groupby() .diff() and .shift() methods to create multiple dummy columns in order to calculate this and still working on it, please let me know the best way to go about this, thanks
My expected output :
CID RefID Date Group MID days_diff A B
100 1 1/01/2021 A NAT 1 1
100 2 3/01/2021 A 2 days 1 1
100 3 4/01/2021 A 101 3 days 1 1
100 4 15/01/2021 A NAT 2 4
100 5 18/01/2021 A 3 days 2 4
200 6 3/03/2021 B NAT 1 6
200 7 4/04/2021 B NAT 2 7
200 8 9/04/2021 B 102 5 days 2 7
200 9 25/04/2021 B NAT 3 9
300 10 26/04/2021 C NAT 1 10
300 11 27/05/2021 C NAT 2 11
300 12 28/05/2021 C 103 1 day 2 11
|
65,640,871
|
How to efficiently select several value ranges in Pandas?
|
<p>I was revising some old code and stumbled upon this snippet.</p>
<pre><code>df = pd.DataFrame({
'x': ['a', 'b', 'c', 'd', 'e', 'dc', 'ca', 'cd', 'cf', 'cv', 'cs', 'ca', 'ac', 'fc'],
'a': [34, 28, 51,5,120,12,45,56,67,54,34,32,1213,2]})
five = df[df['a'] < 5]
ten = df[(df['a'] > 5) & (df['a'] < 10)]
twenty = df[(df['a'] > 10) & (df['a'] < 20 )]
thirty = df[(df['a'] > 20) & (df['a'] < 30 )]
forty = df[(df['a'] > 30) & (df['a'] < 40 )]
fifty = df[(df['a'] > 40) & (df['a'] < 50 )]
sixty = df[(df['a'] > 50) & (df['a'] < 60)]
over = df[(df['a'] > 60)]
</code></pre>
<p>Basically there is a DataFrame and I need to group values that are within a certain range. There are multiple ranges. The grouped values are later used in a box plot.
The code above gets the job done but I am pretty sure there is a better way to do this!</p>
<p><strong>Question:</strong>
What if I need to create 1000 groups? I want to change the edge values and add new groups more dynamically. How can this be done?</p>
| 65,640,963
| 2021-01-09T08:54:20.903000
| 3
| 0
| 4
| 246
|
python|pandas
|
<p><em><strong>METHOD#1</strong></em></p>
<p>You can use <code>pd.cut</code> to create the dynamic groups and save them in a dictionary, then refer to each key for the individual dataframe:</p>
<pre><code>bins = [0,5,10,20,30,40,50,60,np.inf]
labels = ['five','ten','twenty','thirty','forty','fifty','sixty','over']
u = df1.assign(grp=pd.cut(df1['a'],bins,labels=labels))
d = dict(iter(u.groupby("grp")))
</code></pre>
<hr />
<p>test runs:</p>
<pre><code>print(f"""Group five is \n\n {d['five']}\n\n
Group forty is \n\n{d['forty']} \n\n Group over is \n\n{d['over']}""")
Group five is
x a grp
3 d 5 five
13 fc 2 five
Group forty is
x a grp
0 a 34 forty
10 cs 34 forty
11 ca 32 forty
Group over is
x a grp
4 e 120 over
8 cf 67 over
12 ac 1213 over
</code></pre>
<p><em><strong>METHOD#2</strong></em>
you can also use <code>locals</code> for saving the dictionary keys as local variables, but the dict method is better:</p>
<pre><code>bins = [0,5,10,20,30,40,50,60,np.inf]
labels = ['five','ten','twenty','thirty','forty','fifty','sixty','over']
u = df1.assign(grp=pd.cut(df1['a'],bins,labels=labels))
d = dict(iter(u.groupby("grp")))
for k,v in d.items():
locals().update({k:v})
print(over,'\n\n',five,'\n\n',sixty)
x a grp
4 e 120 over
8 cf 67 over
12 ac 1213 over
x a grp
3 d 5 five
13 fc 2 five
x a grp
2 c 51 sixty
7 cd 56 sixty
9 cv 54 sixty
</code></pre>
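<p>Regarding the "What if I need to create 1000 groups?" part of the question: with <code>pd.cut</code> the bin edges and labels can be generated programmatically instead of typed out. A hedged sketch (bin width and label scheme are just an illustration, using <code>df1</code> as in the code above):</p>
<pre><code>import numpy as np
import pandas as pd

edges = list(np.arange(0, 1001, 10)) + [np.inf]   # 0,10,...,1000 plus an open-ended last bin
labels = [f'upto_{int(e)}' for e in edges[1:-1]] + ['over_1000']
u = df1.assign(grp=pd.cut(df1['a'], edges, labels=labels))
d = dict(iter(u.groupby('grp')))   # empty bins show up as empty DataFrames
</code></pre>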
| 2021-01-09T09:09:36.290000
| 1
|
https://pandas.pydata.org/docs/user_guide/indexing.html
|
METHOD#1
You can use pd.cut to create the dynamic groups and save them in a dictionary, then refer to each key for the individual dataframe:
bins = [0,5,10,20,30,40,50,60,np.inf]
labels = ['five','ten','twenty','thirty','forty','fifty','sixty','over']
u = df1.assign(grp=pd.cut(df1['a'],bins,labels=labels))
d = dict(iter(u.groupby("grp")))
test runs:
print(f"""Group five is \n\n {d['five']}\n\n
Group forty is \n\n{d['forty']} \n\n Group over is \n\n{d['over']}""")
Group five is
x a grp
3 d 5 five
13 fc 2 five
Group forty is
x a grp
0 a 34 forty
10 cs 34 forty
11 ca 32 forty
Group over is
x a grp
4 e 120 over
8 cf 67 over
12 ac 1213 over
METHOD#2
you can also use locals for saving the dictionary keys as local variables, but the dict method is better:
bins = [0,5,10,20,30,40,50,60,np.inf]
labels = ['five','ten','twenty','thirty','forty','fifty','sixty','over']
u = df1.assign(grp=pd.cut(df1['a'],bins,labels=labels))
d = dict(iter(u.groupby("grp")))
for k,v in d.items():
locals().update({k:v})
print(over,'\n\n',five,'\n\n',sixty)
x a grp
4 e 120 over
8 cf 67 over
12 ac 1213 over
x a grp
3 d 5 five
13 fc 2 five
x a grp
2 c 51 sixty
7 cd 56 sixty
9 cv 54 sixty
| 0
| 1,330
|
How to efficiently select several value ranges in Pandas?
I was revising some old code and stumbled upon this snippet.
df = pd.DataFrame({
'x': ['a', 'b', 'c', 'd', 'e', 'dc', 'ca', 'cd', 'cf', 'cv', 'cs', 'ca', 'ac', 'fc'],
'a': [34, 28, 51,5,120,12,45,56,67,54,34,32,1213,2]})
five = df[df['a'] < 5]
ten = df[(df['a'] > 5) & (df['a'] < 10)]
twenty = df[(df['a'] > 10) & (df['a'] < 20 )]
thirty = df[(df['a'] > 20) & (df['a'] < 30 )]
forty = df[(df['a'] > 30) & (df['a'] < 40 )]
fifty = df[(df['a'] > 40) & (df['a'] < 50 )]
sixty = df[(df['a'] > 50) & (df['a'] < 60)]
over = df[(df['a'] > 60)]
Basically there is a DataFrame and I need to group values that are within a certain range. There are multiple ranges. The grouped values are later used in a box plot.
The code above gets the job done but I am pretty sure there is a better way to do this!
Question:
What if I need to create 1000 groups? I want to change the edge values and add new groups more dynamically. How can this be done?
|
69,871,395
|
how to expand pandas dataframe based on column value which is list of values
|
<p>I have dataframe like this:</p>
<pre><code>Col1 col2 col3
test0 [1,2,3] [ab,bc,cd]
</code></pre>
<p>Output dataframe I want is:</p>
<pre><code>col1 col2 col3
test0 1 ab
test0 2 bc
test0 3 cd
</code></pre>
<p>There would be multiple column like col2 with same length of list</p>
| 69,871,462
| 2021-11-07T09:57:43.140000
| 2
| null | 1
| 247
|
python|pandas
|
<p>You can do:</p>
<pre><code>outputdf_expandedcols=pd.DataFrame({
"col2":df.apply(lambda x: pd.Series(x['col2']),axis=1).stack().reset_index(level=1, drop=True),
"col3":df.apply(lambda x: pd.Series(x['col3']),axis=1).stack().reset_index(level=1, drop=True)
})
outputdf = df[['Col1']].join(outputdf_expandedcols,how='right')
</code></pre>
<p><code>outputdf</code> will be:</p>
<pre><code> Col1 col2 col3
0 test0 1 ab
0 test0 2 bc
0 test0 3 cd
</code></pre>
<p>If you have more columns to expand you can use a <a href="https://stackoverflow.com/questions/14507591/python-dictionary-comprehension">dict comprehension</a>:</p>
<pre><code>list_of_cols_to_expand = ["col2", "col3"] # put here the column names you want to expand
outputdf_expandedcols=pd.DataFrame({
col:df.apply(lambda x: pd.Series(x[col]),axis=1).stack().reset_index(level=1, drop=True) for col in list_of_cols_to_expand
})
outputdf = df[['Col1']].join(outputdf_expandedcols,how='right')
</code></pre>
<p>Output same as above.</p>
<p>This answer is based on <a href="https://stackoverflow.com/questions/27263805/pandas-column-of-lists-create-a-row-for-each-list-element">this</a> thread.</p>
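<p>Since the linked reference is <code>DataFrame.explode</code>, a shorter alternative might be a multi-column explode (a sketch, assuming pandas 1.3+ and that the list columns have matching lengths per row, as in the example):</p>
<pre><code>outputdf = df.explode(['col2', 'col3'], ignore_index=True)
</code></pre>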
| 2021-11-07T10:07:07.900000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html
|
pandas.DataFrame.explode#
pandas.DataFrame.explode#
DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
You can do:
outputdf_expandedcols=pd.DataFrame({
"col2":df.apply(lambda x: pd.Series(x['col2']),axis=1).stack().reset_index(level=1, drop=True),
"col3":df.apply(lambda x: pd.Series(x['col3']),axis=1).stack().reset_index(level=1, drop=True)
})
outputdf = df[['Col1']].join(outputdf_expandedcols,how='right')
outputdf will be:
Col1 col2 col3
0 test0 1 ab
0 test0 2 bc
0 test0 3 cd
If you have more columns to expand you can use a dict comprehension:
list_of_cols_to_expand = ["col2", "col3"] # put here the column names you want to expand
outputdf_expandedcols=pd.DataFrame({
col:df.apply(lambda x: pd.Series(x[col]),axis=1).stack().reset_index(level=1, drop=True) for col in list_of_cols_to_expand
})
outputdf = df[['Col1']].join(outputdf_expandedcols,how='right')
Output same as above.
This answer is based on this thread.
New in version 0.25.0.
Parameters
columnIndexLabelColumn(s) to explode.
For multiple columns, specify a non-empty list with each element
be str or tuple, and all specified columns their list-like data
on same row of the frame must have matching length.
New in version 1.3.0: Multi-column explode
ignore_indexbool, default FalseIf True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
DataFrameExploded lists to rows of the subset columns;
index will be duplicated for these rows.
Raises
ValueError
If columns of the frame are not unique.
If specified columns to explode is empty list.
If specified columns to explode have not matching count of
elements rowwise in the frame.
See also
DataFrame.unstackPivot a level of the (necessarily hierarchical) index labels.
DataFrame.meltUnpivot a DataFrame from wide format to long format.
Series.explodeExplode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of rows in the
output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
Single-column explode.
>>> df.explode('A')
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
Multi-column explode.
>>> df.explode(list('AC'))
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
| 185
| 1,054
|
how to expand pandas dataframe based on column value which is list of values
I have dataframe like this:
Col1 col2 col3
test0 [1,2,3] [ab,bc,cd]
Output dataframe I want is:
col1 col2 col3
test0 1 ab
test0 2 bc
test0 3 cd
There would be multiple column like col2 with same length of list
|
66,064,078
|
Iterate through date range - monthwise
|
<p>I have a Pandas DataFrame containing datetime as a column. I wish to iterate over it for each month. Currently, I can slice it using 'pandas.query()' using a hard code as follows:</p>
<pre><code># For January 2018-
data_jan = data.query("opened_at >= '2018-01-02 00:00:00' and opened_at <= '2018-01-31 23:59:59'")
</code></pre>
<p>However, to iterate for each month, I was looking for a code. So far, the code I have is:</p>
<pre><code>delta = timedelta(days = 30, hours = 0, minutes = 0, seconds = 0)
while start_date <= end_date:
print(start_date.strftime("%Y-%m-%d"))
start_date += delta
</code></pre>
<p>But, the while loop gives error:</p>
<blockquote>
<p>TypeError: can't compare datetime.datetime to datetime.date</p>
</blockquote>
<p>Help?</p>
| 66,064,116
| 2021-02-05T13:25:35.333000
| 1
| null | 0
| 264
|
python|pandas
|
<p>You can use month periods via <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a> to group by month:</p>
<pre><code>months = df['opened_at'].dt.to_period('m')
for month, g in df.groupby(months):
print (g)
</code></pre>
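<p>A small end-to-end sketch, assuming <code>opened_at</code> is already a datetime column (otherwise convert it first with <code>pd.to_datetime</code>); the sample dates are made up:</p>
<pre><code>import pandas as pd

data = pd.DataFrame({
    'opened_at': pd.to_datetime(['2018-01-05', '2018-01-20', '2018-02-03']),
    'value': [1, 2, 3],
})
# one group per calendar month; pd.Grouper(key='opened_at', freq='M') is an alternative
for month, g in data.groupby(data['opened_at'].dt.to_period('m')):
    print(month)
    print(g)
</code></pre>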
| 2021-02-05T13:28:12.383000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.date_range.html
|
pandas.date_range#
pandas.date_range#
pandas.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=_NoDefault.no_default, inclusive=None, **kwargs)[source]#
Return a fixed frequency DatetimeIndex.
Returns the range of equally spaced time points (where the difference between any
two adjacent points is specified by the given frequency) such that they all
satisfy start <[=] x <[=] end, where the first one and the last one are, resp.,
the first and last time points in that range that fall on the boundary of freq
(if given as a frequency string) or that are valid for freq (if given as a
pandas.tseries.offsets.DateOffset). (If exactly one of start,
You can use month periods via Series.dt.to_period to group by month:
months = df['opened_at'].dt.to_period('m')
for month, g in df.groupby(months):
print (g)
end, or freq is not specified, this missing parameter can be computed
given periods, the number of timesteps in the range. See the note below.)
Parameters
startstr or datetime-like, optionalLeft bound for generating dates.
endstr or datetime-like, optionalRight bound for generating dates.
periodsint, optionalNumber of periods to generate.
freqstr or DateOffset, default ‘D’Frequency strings can have multiples, e.g. ‘5H’. See
here for a list of
frequency aliases.
tzstr or tzinfo, optionalTime zone name for returning localized DatetimeIndex, for example
‘Asia/Hong_Kong’. By default, the resulting DatetimeIndex is
timezone-naive.
normalizebool, default FalseNormalize start/end dates to midnight before generating date range.
namestr, default NoneName of the resulting DatetimeIndex.
closed{None, ‘left’, ‘right’}, optionalMake the interval closed with respect to the given frequency to
the ‘left’, ‘right’, or both sides (None, the default).
Deprecated since version 1.4.0: Argument closed has been deprecated to standardize boundary inputs.
Use inclusive instead, to set each bound as closed or open.
inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; Whether to set each bound as closed or open.
New in version 1.4.0.
**kwargsFor compatibility. Has no effect on the result.
Returns
rngDatetimeIndex
See also
DatetimeIndexAn immutable container for datetimes.
timedelta_rangeReturn a fixed frequency TimedeltaIndex.
period_rangeReturn a fixed frequency PeriodIndex.
interval_rangeReturn a fixed frequency IntervalIndex.
Notes
Of the four parameters start, end, periods, and freq,
exactly three must be specified. If freq is omitted, the resulting
DatetimeIndex will have periods linearly spaced elements between
start and end (closed on both sides).
To learn more about the frequency strings, please see this link.
Examples
Specifying the values
The next four examples generate the same DatetimeIndex, but vary
the combination of start, end and periods.
Specify start and end, with the default daily frequency.
>>> pd.date_range(start='1/1/2018', end='1/08/2018')
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify start and periods, the number of periods (days).
>>> pd.date_range(start='1/1/2018', periods=8)
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify end and periods, the number of periods (days).
>>> pd.date_range(end='1/1/2018', periods=8)
DatetimeIndex(['2017-12-25', '2017-12-26', '2017-12-27', '2017-12-28',
'2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
Specify start, end, and periods; the frequency is generated
automatically (linearly spaced).
>>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3)
DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00',
'2018-04-27 00:00:00'],
dtype='datetime64[ns]', freq=None)
Other Parameters
Changed the freq (frequency) to 'M' (month end frequency).
>>> pd.date_range(start='1/1/2018', periods=5, freq='M')
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31', '2018-04-30',
'2018-05-31'],
dtype='datetime64[ns]', freq='M')
Multiples are allowed
>>> pd.date_range(start='1/1/2018', periods=5, freq='3M')
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
freq can also be specified as an Offset object.
>>> pd.date_range(start='1/1/2018', periods=5, freq=pd.offsets.MonthEnd(3))
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
Specify tz to set the timezone.
>>> pd.date_range(start='1/1/2018', periods=5, tz='Asia/Tokyo')
DatetimeIndex(['2018-01-01 00:00:00+09:00', '2018-01-02 00:00:00+09:00',
'2018-01-03 00:00:00+09:00', '2018-01-04 00:00:00+09:00',
'2018-01-05 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq='D')
inclusive controls whether to include start and end that are on the
boundary. The default, “both”, includes boundary points on either end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive="both")
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
Use inclusive='left' to exclude end if it falls on the boundary.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='left')
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03'],
dtype='datetime64[ns]', freq='D')
Use inclusive='right' to exclude start if it falls on the boundary, and
similarly inclusive='neither' will exclude both start and end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='right')
DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
| 703
| 869
|
Iterate through date range - monthwise
I have a Pandas DataFrame containing datetime as a column. I wish to iterate over it for each month. Currently, I can slice it using 'pandas.query()' using a hard code as follows:
# For January 2018-
data_jan = data.query("opened_at >= '2018-01-02 00:00:00' and opened_at <= '2018-01-31 23:59:59'")
However, to iterate for each month, I was looking for a code. So far, the code I have is:
delta = timedelta(days = 30, hours = 0, minutes = 0, seconds = 0)
while start_date <= end_date:
print(start_date.strftime("%Y-%m-%d"))
start_date += delta
But, the while loop gives error:
TypeError: can't compare datetime.datetime to datetime.date
Help?
|
67,131,190
|
multi query a dataframe table
|
<p>I have a dataframe as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>student_id</th>
<th>gender</th>
<th>major</th>
<th>admitted</th>
</tr>
</thead>
<tbody>
<tr>
<td>35377</td>
<td>female</td>
<td>Chemistry</td>
<td>False</td>
</tr>
<tr>
<td>56105</td>
<td>male</td>
<td>Physics</td>
<td>True</td>
</tr>
</tbody>
</table>
</div>
<p>etc.</p>
<p>How do I find the admission rate for females?</p>
<p>I have tried:</p>
<pre><code>df.loc[(df['gender'] == "female") & (df['admitted'] == "True")].sum()
</code></pre>
<p>But this returns an error:</p>
<pre><code>TypeError: invalid type comparison
</code></pre>
| 67,131,420
| 2021-04-16T19:31:36.947000
| 2
| null | 0
| 31
|
python|pandas
|
<p>I guess the last column is Boolean. Can you try this:</p>
<pre><code>df[df['gender'] == "female"]['admitted'].sum()
</code></pre>
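<p>Since the question asks for an admission <em>rate</em> rather than a count, and the sample data spells the gender out, the mean of the Boolean column may be closer to what is wanted (a sketch, assuming <code>admitted</code> is genuinely Boolean):</p>
<pre><code>rate = df.loc[df['gender'] == 'female', 'admitted'].mean()
</code></pre>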
| 2021-04-16T19:51:18.197000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html
|
pandas.DataFrame.to_sql#
pandas.DataFrame.to_sql#
DataFrame.to_sql(name, con, schema=None, if_exists='fail', index=True, index_label=None, chunksize=None, dtype=None, method=None)[source]#
Write records stored in a DataFrame to a SQL database.
Databases supported by SQLAlchemy [1] are supported. Tables can be
newly created, appended to, or overwritten.
Parameters
namestrName of SQL table.
consqlalchemy.engine.(Engine or Connection) or sqlite3.ConnectionUsing SQLAlchemy makes it possible to use any DB supported by that
I guess the last column is Boolean. Can you try this:
df[df['gender'] == "female"]['admitted'].sum()
library. Legacy support is provided for sqlite3.Connection objects. The user
is responsible for engine disposal and connection closure for the SQLAlchemy
connectable See here.
schemastr, optionalSpecify the schema (if database flavor supports this). If None, use
default schema.
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’How to behave if the table already exists.
fail: Raise a ValueError.
replace: Drop the table before inserting new values.
append: Insert new values to the existing table.
indexbool, default TrueWrite DataFrame index as a column. Uses index_label as the column
name in the table.
index_labelstr or sequence, default NoneColumn label for index column(s). If None is given (default) and
index is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
chunksizeint, optionalSpecify the number of rows in each batch to be written at a time.
By default, all rows will be written at once.
dtypedict or scalar, optionalSpecifying the datatype for columns. If a dictionary is used, the
keys should be the column names and the values should be the
SQLAlchemy types or strings for the sqlite3 legacy mode. If a
scalar is provided, it will be applied to all columns.
method{None, ‘multi’, callable}, optionalControls the SQL insertion clause used:
None : Uses standard SQL INSERT clause (one per row).
‘multi’: Pass multiple values in a single INSERT clause.
callable with signature (pd_table, conn, keys, data_iter).
Details and a sample callable implementation can be found in the
section insert method.
Returns
None or intNumber of rows affected by to_sql. None is returned if the callable
passed into method does not return an integer number of rows.
The number of returned rows affected is the sum of the rowcount
attribute of sqlite3.Cursor or SQLAlchemy connectable which may not
reflect the exact number of written rows as stipulated in the
sqlite3 or
SQLAlchemy.
New in version 1.4.0.
Raises
ValueErrorWhen the table already exists and if_exists is ‘fail’ (the
default).
See also
read_sqlRead a DataFrame from a table.
Notes
Timezone aware datetime columns will be written as
Timestamp with timezone type with SQLAlchemy if supported by the
database. Otherwise, the datetimes will be stored as timezone unaware
timestamps local to the original timezone.
References
1
https://docs.sqlalchemy.org
2
https://www.python.org/dev/peps/pep-0249/
Examples
Create an in-memory SQLite database.
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite://', echo=False)
Create a table from scratch with 3 rows.
>>> df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']})
>>> df
name
0 User 1
1 User 2
2 User 3
>>> df.to_sql('users', con=engine)
3
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 1'), (1, 'User 2'), (2, 'User 3')]
An sqlalchemy.engine.Connection can also be passed to con:
>>> with engine.begin() as connection:
... df1 = pd.DataFrame({'name' : ['User 4', 'User 5']})
... df1.to_sql('users', con=connection, if_exists='append')
2
This is allowed to support operations that require that the same
DBAPI connection is used for the entire operation.
>>> df2 = pd.DataFrame({'name' : ['User 6', 'User 7']})
>>> df2.to_sql('users', con=engine, if_exists='append')
2
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 1'), (1, 'User 2'), (2, 'User 3'),
(0, 'User 4'), (1, 'User 5'), (0, 'User 6'),
(1, 'User 7')]
Overwrite the table with just df2.
>>> df2.to_sql('users', con=engine, if_exists='replace',
... index_label='id')
2
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 6'), (1, 'User 7')]
Specify the dtype (especially useful for integers with missing values).
Notice that while pandas is forced to store the data as floating point,
the database supports nullable integers. When fetching the data with
Python, we get back integer scalars.
>>> df = pd.DataFrame({"A": [1, None, 2]})
>>> df
A
0 1.0
1 NaN
2 2.0
>>> from sqlalchemy.types import Integer
>>> df.to_sql('integers', con=engine, index=False,
... dtype={"A": Integer()})
3
>>> engine.execute("SELECT * FROM integers").fetchall()
[(1,), (None,), (2,)]
| 531
| 626
|
multi query a dataframe table
I have a dataframe as follows:
student_id
gender
major
admitted
35377
female
Chemistry
False
56105
male
Physics
True
etc.
How do I find the admission rate for females?
I have tried:
df.loc[(df['gender'] == "female") & (df['admitted'] == "True")].sum()
But this returns an error:
TypeError: invalid type comparison
|
69,928,768
|
Expand/ungroup dataframe
|
<p>I have a dataframe with regions and their yearly goals. I would like to "ungroup" this into monthly goals.</p>
<p>There must be some easier way than this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_records(data=[['region1',60000],['region2',120000]], columns=['region','goal'])
# region goal
#0 region1 60000
#1 region2 120000
data = {'month':[1,2,3,4,5,6,7,8,9,10,11,12]}
df2 = pd.DataFrame.from_dict(data)
df3 = df.merge(df2, how='cross')
df3['goal'] = (df3['goal']/12).astype(int)
# region goal month
# 0 region1 5000 1
# 1 region1 5000 2
# 2 region1 5000 3
# 3 region1 5000 4
# 4 region1 5000 5
# 5 region1 5000 6
# 6 region1 5000 7
# 7 region1 5000 8
# 8 region1 5000 9
# 9 region1 5000 10
# 10 region1 5000 11
# 11 region1 5000 12
# 12 region2 10000 1
# 13 region2 10000 2
# 14 region2 10000 3
# 15 region2 10000 4
# 16 region2 10000 5
# 17 region2 10000 6
# 18 region2 10000 7
# 19 region2 10000 8
# 20 region2 10000 9
# 21 region2 10000 10
# 22 region2 10000 11
# 23 region2 10000 12
</code></pre>
| 69,928,853
| 2021-11-11T13:04:29.457000
| 1
| null | 1
| 32
|
pandas
|
<p>In my opinion your solution is good; an alternative with <code>range</code> instead of a list is better:</p>
<pre><code>df3 = df.merge(pd.DataFrame({'month':range(1, 13)}), how='cross')
df3['goal'] = (df3['goal']/12).astype(int)
</code></pre>
<p>Another idea without cross join with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.repeat.html" rel="nofollow noreferrer"><code>Index.repeat</code></a> for new rows and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a> with <a href="https://numpy.org/doc/stable/reference/generated/numpy.tile.html" rel="nofollow noreferrer"><code>numpy.tile</code></a> for repeat months:</p>
<pre><code>import numpy as np

df = (df.loc[df.index.repeat(12)]
.reset_index(drop=True)
.assign(goal = lambda x: x['goal']/12,
month = np.tile(range(1, 13), len(df.index))))
</code></pre>
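<p>For reference, the second approach should produce roughly the following on the sample data (<code>goal</code> becomes a float because of the division):</p>
<pre><code>print(df.head(3))
#     region    goal  month
# 0  region1  5000.0      1
# 1  region1  5000.0      2
# 2  region1  5000.0      3
</code></pre>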
| 2021-11-11T13:10:14.840000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html
|
pandas.DataFrame.explode#
pandas.DataFrame.explode#
DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
New in version 0.25.0.
Parameters
columnIndexLabelColumn(s) to explode.
For multiple columns, specify a non-empty list with each element
In my opinion your solution is good; an alternative with range instead of a list is better:
df3 = df.merge(pd.DataFrame({'month':range(1, 13)}), how='cross')
df3['goal'] = (df3['goal']/12).astype(int)
Another idea without cross join with Index.repeat for new rows and DataFrame.assign with numpy.tile for repeat months:
import numpy as np

df = (df.loc[df.index.repeat(12)]
.reset_index(drop=True)
.assign(goal = lambda x: x['goal']/12,
month = np.tile(range(1, 13), len(df.index))))
be str or tuple, and all specified columns their list-like data
on same row of the frame must have matching length.
New in version 1.3.0: Multi-column explode
ignore_indexbool, default FalseIf True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
DataFrameExploded lists to rows of the subset columns;
index will be duplicated for these rows.
Raises
ValueError
If columns of the frame are not unique.
If specified columns to explode is empty list.
If specified columns to explode have not matching count of
elements rowwise in the frame.
See also
DataFrame.unstackPivot a level of the (necessarily hierarchical) index labels.
DataFrame.meltUnpivot a DataFrame from wide format to long format.
Series.explodeExplode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of rows in the
output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
Single-column explode.
>>> df.explode('A')
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
Multi-column explode.
>>> df.explode(list('AC'))
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
| 326
| 812
|
Expand/ungroup dataframe
I have a dataframe with regions and their yearly goals. I would like to "ungroup" this into monthly goals.
There must be some easier way than this:
import pandas as pd
df = pd.DataFrame.from_records(data=[['region1',60000],['region2',120000]], columns=['region','goal'])
# region goal
#0 region1 60000
#1 region2 120000
data = {'month':[1,2,3,4,5,6,7,8,9,10,11,12]}
df2 = pd.DataFrame.from_dict(data)
df3 = df.merge(df2, how='cross')
df3['goal'] = (df3['goal']/12).astype(int)
# region goal month
# 0 region1 5000 1
# 1 region1 5000 2
# 2 region1 5000 3
# 3 region1 5000 4
# 4 region1 5000 5
# 5 region1 5000 6
# 6 region1 5000 7
# 7 region1 5000 8
# 8 region1 5000 9
# 9 region1 5000 10
# 10 region1 5000 11
# 11 region1 5000 12
# 12 region2 10000 1
# 13 region2 10000 2
# 14 region2 10000 3
# 15 region2 10000 4
# 16 region2 10000 5
# 17 region2 10000 6
# 18 region2 10000 7
# 19 region2 10000 8
# 20 region2 10000 9
# 21 region2 10000 10
# 22 region2 10000 11
# 23 region2 10000 12
|
63,240,994
|
How to extract by referring to its date in pandas
|
<p>I have df like following one</p>
<pre><code>customer movement date
A buy 2019/5/4
A inquiry 2020/7/1
A cancel 2020/8/1
B buy 2019/6/1
B cancel 2020/8/1
</code></pre>
<p>I'd like to trace each customer's <code>movement</code> before <code>cancel</code></p>
<p>first, grouping by <code>customer</code></p>
<pre><code>A buy 2019/5/4
A inquiry 2020/7/1
A cancel 2020/8/1
</code></pre>
<p>Then I'd like to get <code>cancel date</code></p>
<pre><code>A cancel 2020/8/1
</code></pre>
<p>And then, I'd like to get the <code>previous movement</code> within <code>1 year</code> before cancel.</p>
<pre><code>customer movement date
A inquiry 2020/7/1
A cancel 2020/8/1
</code></pre>
<p>After that, I'd like to repeat this for each <code>customer</code>.</p>
<p>So my desired result is like below</p>
<pre><code>customer movement date
A inquiry 2020/7/1
A cancel 2020/8/1
B cancel 2020/8/1
</code></pre>
<p>Is there any way to achieve this?
It is complicated enough that I couldn't work out the procedure myself.</p>
<p>Thanks</p>
| 63,241,082
| 2020-08-04T05:58:15.323000
| 1
| null | 2
| 32
|
python|pandas
|
<p>First convert the column to datetimes and create a Series containing only the <code>cancel</code> rows, indexed by customer via <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a>:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
s = df[df['movement'].eq('cancel')].set_index('customer')['date']
</code></pre>
<p>Then map each customer to its cancel date minus one year with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> and keep the rows whose <code>date</code> is later than that threshold, using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.lt.html" rel="nofollow noreferrer"><code>Series.lt</code></a> in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df = df[df['customer'].map(s.sub(pd.DateOffset(years=1))).lt(df['date'])]
print (df)
customer movement date
1 A inquiry 2020-07-01
2 A cancel 2020-08-01
4 B cancel 2020-08-01
</code></pre>
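<p>A self-contained sketch of the same approach, assuming the small frame from the question (hypothetical sample data), for anyone who wants to run it end to end:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'customer': ['A', 'A', 'A', 'B', 'B'],
    'movement': ['buy', 'inquiry', 'cancel', 'buy', 'cancel'],
    'date': ['2019/5/4', '2020/7/1', '2020/8/1', '2019/6/1', '2020/8/1'],
})

df['date'] = pd.to_datetime(df['date'])
# cancel date per customer
s = df[df['movement'].eq('cancel')].set_index('customer')['date']
# keep rows dated within one year before each customer's cancel date
out = df[df['customer'].map(s.sub(pd.DateOffset(years=1))).lt(df['date'])]
print(out)
</code></pre>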
| 2020-08-04T06:06:03.253000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.Period.days_in_month.html
|
pandas.Period.days_in_month#
First convert the column to datetimes and create a Series containing only the cancel rows, indexed by customer via DataFrame.set_index:
df['date'] = pd.to_datetime(df['date'])
s = df[df['movement'].eq('cancel')].set_index('customer')['date']
Then map each customer to its cancel date minus one year with Series.map and keep the rows whose date is later than that threshold, using Series.lt in boolean indexing:
df = df[df['customer'].map(s.sub(pd.DateOffset(years=1))).lt(df['date'])]
print (df)
customer movement date
1 A inquiry 2020-07-01
2 A cancel 2020-08-01
4 B cancel 2020-08-01
pandas.Period.days_in_month#
Period.days_in_month#
Get the total number of days in the month that this period falls on.
Returns
int
See also
Period.daysinmonthGets the number of days in the month.
DatetimeIndex.daysinmonthGets the number of days in the month.
calendar.monthrangeReturns a tuple containing weekday (0-6 ~ Mon-Sun) and number of days (28-31).
Examples
>>> p = pd.Period('2018-2-17')
>>> p.days_in_month
28
>>> pd.Period('2018-03-01').days_in_month
31
Handles the leap year case as well:
>>> p = pd.Period('2016-2-17')
>>> p.days_in_month
29
| 30
| 584
|
How to extract by referring to its date in pandas
I have a df like the following one:
customer movement date
A buy 2019/5/4
A inquiry 2020/7/1
A cancel 2020/8/1
B buy 2019/6/1
B cancel 2020/8/1
I'd like to trace each customer's movement before cancel
first, grouping by customer
A buy 2019/5/4
A inquiry 2020/7/1
A cancel 2020/8/1
Then I'd like to get cancel date
A cancel 2020/8/1
And then, I'd like to get the previous movement within 1 year before cancel.
customer movement date
A inquiry 2020/7/1
A cancel 2020/8/1
After that, I'd like to repeat this for each customer.
So my desired result is like below
customer movement date
A inquiry 2020/7/1
A cancel 2020/8/1
B cancel 2020/8/1
Is there any way to achieve this?
It is complicated enough that I couldn't work out the procedure myself.
Thanks
|
62,204,219
|
How to compare two dataframes using column index?
|
<p>I am exporting hdfs query output into a csv file using the INSERT OVERWRITE LOCAL DIRECTORY command, which exports the data without a header.
I got another dataframe from Oracle output with a file header, which I need to compare against the hdfs output.</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.read_csv('/home/User/hdfs_result.csv', header = None)
print(df1)
0 1 2
0 XPRN A 2019-12-16 00:00:00
1 XPRW I 2019-12-16 00:00:00
2 XPS2 I 2003-09-30 00:00:00
df = pd.read_sql(sqlquery, sqlconn)
UNIT STATUS Date
0 XPRN A 2019-12-16 00:00:00
1 XPRW A 2019-12-16 00:00:00
2 XPS2 I 2003-09-30 00:00:00
</code></pre>
<p>Since df1 has no header I can't use merge or join to compare the data, though I can do df - df1.</p>
<p>Please suggest how I can compare and print the differences.</p>
| 62,204,345
| 2020-06-04T21:12:20.090000
| 1
| null | 0
| 32
|
python|pandas
|
<p>You can pass the underlying numpy array for comparison:</p>
<pre><code>df2.where(df2==df1.values)
</code></pre>
<p>Output (differences are masked as <code>NaN</code>):</p>
<pre><code> UNIT STATUS Date
0 XPRN A 2019-12-16 00:00:00
1 XPRW NaN 2019-12-16 00:00:00
2 XPS2 I 2003-09-30 00:00:00
</code></pre>
<p>For non-matching rows:</p>
<pre><code>df2[(df2!=df1.values).any(1)]
</code></pre>
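<p>A minimal runnable sketch of both steps, assuming the three-row frames shown in the question (hypothetical sample data); note that giving <code>df1</code> the same column names also unlocks <code>DataFrame.compare</code>:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame([['XPRN', 'A', '2019-12-16 00:00:00'],
                    ['XPRW', 'I', '2019-12-16 00:00:00'],
                    ['XPS2', 'I', '2003-09-30 00:00:00']])
df2 = pd.DataFrame({'UNIT': ['XPRN', 'XPRW', 'XPS2'],
                    'STATUS': ['A', 'A', 'I'],
                    'Date': ['2019-12-16 00:00:00', '2019-12-16 00:00:00', '2003-09-30 00:00:00']})

# mask equal cells: differences show up as NaN
print(df2.where(df2 == df1.values))

# rows with at least one difference
print(df2[(df2 != df1.values).any(axis=1)])

# optional: align the column names, then DataFrame.compare works too
df1.columns = df2.columns
print(df2.compare(df1))
</code></pre>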
| 2020-06-04T21:22:26.313000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.compare.html
|
pandas.DataFrame.compare#
pandas.DataFrame.compare#
DataFrame.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another DataFrame and show the differences.
New in version 1.1.0.
Parameters
otherDataFrameObject to compare with.
align_axis{0 or ‘index’, 1 or ‘columns’}, default 1Determine which axis to align the comparison on.
0, or ‘index’Resulting differences are stacked vertically with rows drawn alternately from self and other.
1, or ‘columns’Resulting differences are aligned horizontally with columns drawn alternately from self and other.
keep_shapebool, default FalseIf true, all rows and columns are kept.
Otherwise, only the ones with different values are kept.
keep_equalbool, default FalseIf true, the result keeps values that are equal.
Otherwise, equal values are shown as NaNs.
result_namestuple, default (‘self’, ‘other’)Set the dataframes names in the comparison.
You can pass the underlying numpy array for comparison:
df2.where(df2==df1.values)
Output (differences are masked as NaN):
UNIT STATUS Date
0 XPRN A 2019-12-16 00:00:00
1 XPRW NaN 2019-12-16 00:00:00
2 XPS2 I 2003-09-30 00:00:00
For non-matching rows:
df2[(df2!=df1.values).any(1)]
New in version 1.5.0.
Returns
DataFrameDataFrame that shows the differences stacked side by side.
The resulting index will be a MultiIndex with ‘self’ and ‘other’
stacked alternately at the inner level.
Raises
ValueErrorWhen the two DataFrames don’t have identical labels or shape.
See also
Series.compareCompare with another Series and show differences.
DataFrame.equalsTest whether two objects contain the same elements.
Notes
Matching NaNs will not appear as a difference.
Can only compare identically-labeled
(i.e. same shape, identical row and column labels) DataFrames
Examples
>>> df = pd.DataFrame(
... {
... "col1": ["a", "a", "b", "b", "a"],
... "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
... "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
... },
... columns=["col1", "col2", "col3"],
... )
>>> df
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
>>> df2 = df.copy()
>>> df2.loc[0, 'col1'] = 'c'
>>> df2.loc[2, 'col3'] = 4.0
>>> df2
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
Align the differences on columns
>>> df.compare(df2)
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Assign result_names
>>> df.compare(df2, result_names=("left", "right"))
col1 col3
left right left right
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Stack the differences on rows
>>> df.compare(df2, align_axis=0)
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
Keep the equal values
>>> df.compare(df2, keep_equal=True)
col1 col3
self other self other
0 a c 1.0 1.0
2 b b 3.0 4.0
Keep all original rows and columns
>>> df.compare(df2, keep_shape=True)
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
Keep all original rows and columns and also all original values
>>> df.compare(df2, keep_shape=True, keep_equal=True)
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 970
| 1,289
|
How to compare two dataframes using column index?
I am exporting hdfs query output into a csv file using the INSERT OVERWRITE LOCAL DIRECTORY command, which exports the data without a header.
I got another dataframe from Oracle output with a file header, which I need to compare against the hdfs output.
df1 = pd.read_csv('/home/User/hdfs_result.csv', header = None)
print(df1)
0 1 2
0 XPRN A 2019-12-16 00:00:00
1 XPRW I 2019-12-16 00:00:00
2 XPS2 I 2003-09-30 00:00:00
df = pd.read_sql(sqlquery, sqlconn)
UNIT STATUS Date
0 XPRN A 2019-12-16 00:00:00
1 XPRW A 2019-12-16 00:00:00
2 XPS2 I 2003-09-30 00:00:00
Since df1 has no header I can't use merge or join to compare the data, though I can do df - df1.
Please suggest how I can compare and print the differences.
|
67,600,810
|
Find local duplicates (which follow each other) in pandas
|
<p>I want to find local duplicates and give them a unique id, directly in pandas.</p>
<p><strong>Real-life example:</strong></p>
<p>Time-ordered purchase data where a customer id occurs multiple times (because the customer visits a shop multiple times a week), but I want to identify occasions where the customer purchases multiple items at the same time.</p>
<p><strong>My current approach would look like this:</strong></p>
<pre><code>def follow_ups(lst):
lst2 = [None] + lst[:-1]
i = 0
l = []
for e1, e2 in zip(lst, lst2):
if e1 != e2:
i += 1
l.append(i)
return l
follow_ups(['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C'])
# [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9]
# for pandas
df['out'] = follow_ups(df['test'])
</code></pre>
<p>But I have the feeling there might be a much simpler and cleaner approach in pandas which I am unable to find.</p>
<p><strong>Pandas Sample data</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame({'test':['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C']})
# test
# 0 A
# 1 B
# 2 B
# 3 C
# 4 B
# 5 D
# 6 D
# 7 D
# 8 E
# 9 A
# 10 B
# 11 C
df_out = pd.DataFrame({'test':['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C'], 'out':[1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9]})
# test out
# 0 A 1
# 1 B 2
# 2 B 2
# 3 C 3
# 4 B 4
# 5 D 5
# 6 D 5
# 7 D 5
# 8 E 6
# 9 A 7
# 10 B 8
# 11 C 9
</code></pre>
| 67,600,912
| 2021-05-19T09:44:59.540000
| 1
| null | 0
| 37
|
python|pandas
|
<p>You can check whether your column <code>test</code> differs from its shifted version, using <code>shift()</code> with <code>ne()</code>, and apply <code>cumsum()</code> to that:</p>
<pre><code>df['out'] = df['test'].ne(df['test'].shift()).cumsum()
</code></pre>
<p>Which prints:</p>
<pre><code>df
test out
0 A 1
1 B 2
2 B 2
3 C 3
4 B 4
5 D 5
6 D 5
7 D 5
8 E 6
9 A 7
10 B 8
11 C 9
</code></pre>
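<p>For the real-life purchase setting, a hedged sketch of the same pattern applied to a hypothetical customer/timestamp frame (all names and data invented for illustration): a new basket id starts whenever either the customer or the timestamp changes between adjacent rows.</p>
<pre><code>import pandas as pd

purchases = pd.DataFrame({
    'customer': [10, 10, 10, 11, 10, 11, 11],
    'timestamp': pd.to_datetime(['2021-05-01 09:00', '2021-05-01 09:00',
                                 '2021-05-01 09:01', '2021-05-01 10:00',
                                 '2021-05-03 12:00', '2021-05-03 12:30',
                                 '2021-05-03 12:30']),
}).sort_values(['customer', 'timestamp'])

key = purchases[['customer', 'timestamp']]
# True whenever customer or timestamp differs from the previous row
changed = key.ne(key.shift()).any(axis=1)
purchases['basket_id'] = changed.cumsum()
print(purchases)
</code></pre>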
| 2021-05-19T09:51:22.560000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.duplicated.html
|
pandas.Series.duplicated#
pandas.Series.duplicated#
Series.duplicated(keep='first')[source]#
Indicate duplicate Series values.
Duplicated values are indicated as True values in the resulting
Series. Either all duplicates, all except the first or all except the
last occurrence of duplicates can be indicated.
Parameters
keep{‘first’, ‘last’, False}, default ‘first’Method to handle dropping duplicates:
‘first’ : Mark duplicates as True except for the first
occurrence.
‘last’ : Mark duplicates as True except for the last
occurrence.
False : Mark all duplicates as True.
Returns
Series[bool]Series indicating whether each value has occurred in the
preceding values.
See also
Index.duplicatedEquivalent method on pandas.Index.
DataFrame.duplicatedEquivalent method on pandas.DataFrame.
Series.drop_duplicatesRemove duplicate values from Series.
You can check whether your column test differs from its shifted version, using shift() with ne(), and apply cumsum() to that:
df['out'] = df['test'].ne(df['test'].shift()).cumsum()
Which prints:
df
test out
0 A 1
1 B 2
2 B 2
3 C 3
4 B 4
5 D 5
6 D 5
7 D 5
8 E 6
9 A 7
10 B 8
11 C 9
Examples
By default, for each set of duplicated values, the first occurrence is
set on False and all others on True:
>>> animals = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama'])
>>> animals.duplicated()
0 False
1 False
2 True
3 False
4 True
dtype: bool
which is equivalent to
>>> animals.duplicated(keep='first')
0 False
1 False
2 True
3 False
4 True
dtype: bool
By using ‘last’, the last occurrence of each set of duplicated values
is set on False and all others on True:
>>> animals.duplicated(keep='last')
0 True
1 False
2 True
3 False
4 False
dtype: bool
By setting keep on False, all duplicates are True:
>>> animals.duplicated(keep=False)
0 True
1 False
2 True
3 False
4 True
dtype: bool
| 865
| 1,238
|
Find local duplicates (which follow each other) in pandas
I want to find local duplicates and give them a unique id, directly in pandas.
Real-life example:
Time-ordered purchase data where a customer id occurs multiple times (because the customer visits a shop multiple times a week), but I want to identify occasions where the customer purchases multiple items at the same time.
My current approach would look like this:
def follow_ups(lst):
lst2 = [None] + lst[:-1]
i = 0
l = []
for e1, e2 in zip(lst, lst2):
if e1 != e2:
i += 1
l.append(i)
return l
follow_ups(['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C'])
# [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9]
# for pandas
df['out'] = follow_ups(df['test'])
But I have the feeling there might be a much simpler and cleaner approach in pandas which I am unable to find.
Pandas Sample data
import pandas as pd
df = pd.DataFrame({'test':['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C']})
# test
# 0 A
# 1 B
# 2 B
# 3 C
# 4 B
# 5 D
# 6 D
# 7 D
# 8 E
# 9 A
# 10 B
# 11 C
df_out = pd.DataFrame({'test':['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C'], 'out':[1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9]})
# test out
# 0 A 1
# 1 B 2
# 2 B 2
# 3 C 3
# 4 B 4
# 5 D 5
# 6 D 5
# 7 D 5
# 8 E 6
# 9 A 7
# 10 B 8
# 11 C 9
|
63,945,864
|
Grouping/Summarising and summing a data-frame based on another column
|
<p>I have a dataframe (<code>df</code>) created from a numpy array that looks like:</p>
<pre><code>0.22 1
0.31 1
0.91 1
0.48 2
0.2 2
0.09 2
0.9 3
0.71 3
0.73 3
0.65 4
0.16 4
0.9 4
0.75 5
0.87 5
0.72 5
0.68 6
0.54 6
0.48 6
</code></pre>
<p>I want to create a summary data-frame that sums the values in the first column according to their relative position within each group defined by the second column. So my desired summary data-frame output from the above example would look like:</p>
<pre><code>3.68 total
2.79 total
3.83 total
</code></pre>
<p>Where:
the first value in the summary data-frame would be equal to: <code>0.22+0.48+0.9+0.65+0.75+0.68=3.68</code></p>
<p>the second value in the summary data-frame would be equal to: <code>0.31+0.2+0.71+0.16+0.87+0.54=2.79</code></p>
<p>the third value in the summary data-frame would be equal to:
<code>0.91+0.09+0.73+0.9+0.72+0.48=3.83</code></p>
| 63,945,905
| 2020-09-17T20:35:31.117000
| 2
| null | 1
| 39
|
python|pandas
|
<p>You can do groupby twice: once to label the relative position within each group, and once to sum:</p>
<pre><code>df[0].groupby(df.groupby(df[1]).cumcount()).sum()
</code></pre>
<p>Output:</p>
<pre><code>0 3.68
1 2.79
2 3.83
Name: 0, dtype: float64
</code></pre>
<p><strong>Option 2</strong>: If all groups have an equal number of elements, we can just reshape:</p>
<pre><code>df[0].values.reshape(df[1].max(),-1).sum(0)
# out
# array([3.68, 2.79, 3.83])
</code></pre>
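<p>A self-contained sketch with the sample numbers from the question, so both options can be run directly (column names are the default integers 0 and 1, as for a frame built from a numpy array):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({0: [0.22, 0.31, 0.91, 0.48, 0.20, 0.09, 0.90, 0.71, 0.73,
                       0.65, 0.16, 0.90, 0.75, 0.87, 0.72, 0.68, 0.54, 0.48],
                   1: [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]})

# option 1: label each row by its position within its group, then sum per position
print(df[0].groupby(df.groupby(df[1]).cumcount()).sum())

# option 2: reshape, valid only when every group has the same number of rows
print(df[0].values.reshape(df[1].max(), -1).sum(axis=0))   # [3.68 2.79 3.83]
</code></pre>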
| 2020-09-17T20:39:55.787000
| 2
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
You can do groupby twice: once to label the relative position within each group, and once to sum:
df[0].groupby(df.groupby(df[1]).cumcount()).sum()
Output:
0 3.68
1 2.79
2 3.83
Name: 0, dtype: float64
Option 2: If all groups have an equal number of elements, we can just reshape:
df[0].values.reshape(df[1].max(),-1).sum(0)
# out
# array([3.68, 2.79, 3.83])
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
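For instance, a minimal sketch (reusing the df grouped above, with an assumed peak-to-peak style reduction) shows that any Series-to-scalar callable is accepted:

df.groupby("A")["C"].agg(lambda ser: ser.max() - ser.min())   # one scalar per group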
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
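A minimal sketch of that pattern, assuming a hypothetical 90th-percentile aggregation that needs an extra q argument:

import functools
import pandas as pd

animals = pd.DataFrame({
    "kind": ["cat", "dog", "cat", "dog"],
    "height": [9.1, 6.0, 9.5, 34.0],
})

# bind the extra argument up front, then use the partial like any other aggfunc
q90 = functools.partial(pd.Series.quantile, q=0.9)
animals.groupby("kind").agg(height_q90=("height", q90))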
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift..
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
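As a small illustration (not from the original guide): if one group's result is an int and another's is a float, the combined output is upcast to a common float64 dtype:
tmp = pd.DataFrame({"key": ["a", "b"], "val": [1, 2]})
tmp.groupby("key")["val"].apply(lambda g: g.iloc[0] * (1.5 if g.name == "b" else 1))
# key
# a    1.0
# b    3.0
# dtype: float64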
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
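A minimal sketch of a Numba-backed aggregation, assuming numba is installed (the frame df_num and the helper grp_mean are made-up names for illustration):
import numpy as np

def grp_mean(values, index):          # required signature: (values, index)
    total = 0.0
    for v in values:                  # plain loop so Numba can compile it in nopython mode
        total += v
    return total / len(values)

df_num = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})
df_num.groupby("key")["val"].agg(grp_mean, engine="numba")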
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only of interest for one column (here colname), that column may be selected
before applying the aggregation function.
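For example, with the frame above both of the following compute the same numbers, but the first never aggregates column D:
df.groupby("A")["C"].std()                     # aggregates only C
df.groupby("A").std(numeric_only=True)["C"]    # aggregates C and D, then discards D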
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
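A small illustration (not part of the original text): rows whose key is NaN simply disappear from the result:
d = pd.DataFrame({"key": ["a", np.nan, "a"], "val": [1, 2, 3]})
d.groupby("key")["val"].sum()
# key
# a    4
# Name: val, dtype: int64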
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
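To see the distinction, consider a frame where “b” appears first; “a” still receives number 0 because groups are iterated in sorted order by default (a small sketch, not from the original guide):
dfg2 = pd.DataFrame(list("bab"), columns=["A"])
dfg2.groupby("A").ngroup()
# 0    1
# 1    0
# 2    1
# dtype: int64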
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
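.pipe also forwards extra arguments to the function, so parameterized helpers work as well (a sketch with a made-up helper):
def top_revenue(grouped, n=1):
    # grouped is the GroupBy object; return the n largest summed revenues
    return grouped["Revenue"].sum().nlargest(n)

df.groupby(["Store", "Product"]).pipe(top_revenue, n=2)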
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
To make resampling work on indices that are not datetime-like, the following procedure can be used.
In the following examples, df.index // 5 returns an integer array of bin labels which is used to determine which rows are grouped together for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 259
| 619
|
Grouping/Summarising and summing a data-frame based on another column
I have a dataframe (df) created from a numpy array that looks like:
0.22 1
0.31 1
0.91 1
0.48 2
0.2 2
0.09 2
0.9 3
0.71 3
0.73 3
0.65 4
0.16 4
0.9 4
0.75 5
0.87 5
0.72 5
0.68 6
0.54 6
0.48 6
I want to create a summary data-frame that sums the values in the first column according to their position within each group defined by the second column. So my desired summary data-frame output from the above example would look like:
3.68 total
2.79 total
3.83 total
Where:
the first value in the summary data-frame would be equal to: 0.22+0.48+0.9+0.65+0.75+0.68=3.68
the second value in the summary data-frame would be equal to: 0.31+0.2+0.71+0.16+0.87+0.54=2.79
the third value in the summary data-frame would be equal to:
0.91+0.09+0.73+0.9+0.72+0.48=3.83
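One possible approach (a sketch, assuming the value column is labelled 0 and the group column 1): number the rows within each group with cumcount and sum the values by that position:
summary = df.groupby(df.groupby(1).cumcount())[0].sum()
# 0    3.68
# 1    2.79
# 2    3.83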
|
63,532,969
|
How to create a new column from an existing column in a pandas dataframe
|
<p>I have the following pandas dataframe:</p>
<pre><code>df = pd.DataFrame([-0.167085, 0.009688, -0.034906, -2.393235, 1.006652],
index=['a', 'b', 'c', 'd', 'e'],
columns=['Feature Importances'])
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/syrML.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/syrML.png" alt="df - create new col" /></a></p>
<p>Without having to use any loops, what is the best way to create a new column (say called <code>Difference from 0</code>) where the values in this new <code>Difference from 0</code> column are the distances from 0 for each of the <code>Feature Importances</code> values. For eg. for <code>a</code>, <code>Difference from 0</code> value would be 0 - (-0.167085) = 0.167085, for <code>e</code>, <code>Difference from 0</code> value would be 1.006652 - 0 = 1.006652, etc.</p>
<p>Many thanks in advance.</p>
| 63,532,998
| 2020-08-22T05:46:32.677000
| 2
| null | 0
| 44
|
python|pandas
|
<p>Assign a new column that takes the absolute value of the <code>Feature Importances</code> column:</p>
<pre><code>df["Difference from 0"] = df["Feature Importances"].abs()
</code></pre>
| 2020-08-22T05:51:09.647000
| 2
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/05_add_columns.html
|
How to create new columns derived from existing columns?#
In [1]: import pandas as pd
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) is used, made
available by OpenAQ and using the
py-openaq package.
The air_quality_no2.csv data set provides \(NO_2\) values for
the measurement stations FR04014, BETR801 and London Westminster
in respectively Paris, Antwerp and London.
To raw data
In [2]: air_quality = pd.read_csv("data/air_quality_no2.csv", index_col=0, parse_dates=True)
In [3]: air_quality.head()
Out[3]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
Assign a new column that takes the absolute value of the Feature Importances column:
df["Difference from 0"] = df["Feature Importances"].abs()
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN
How to create new columns derived from existing columns?#
I want to express the \(NO_2\) concentration of the station in London in mg/m\(^3\).
(If we assume temperature of 25 degrees Celsius and pressure of 1013
hPa, the conversion factor is 1.882)
In [4]: air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882
In [5]: air_quality.head()
Out[5]:
station_antwerp ... london_mg_per_cubic
datetime ...
2019-05-07 02:00:00 NaN ... 43.286
2019-05-07 03:00:00 50.5 ... 35.758
2019-05-07 04:00:00 45.0 ... 35.758
2019-05-07 05:00:00 NaN ... 30.112
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 4 columns]
To create a new column, use the [] brackets with the new column name
at the left side of the assignment.
Note
The calculation of the values is done element-wise. This
means all values in the given column are multiplied by the value 1.882
at once. You do not need to use a loop to iterate each of the rows!
I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column.
In [6]: air_quality["ratio_paris_antwerp"] = (
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...: )
...:
In [7]: air_quality.head()
Out[7]:
station_antwerp ... ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN ... NaN
2019-05-07 03:00:00 50.5 ... 0.495050
2019-05-07 04:00:00 45.0 ... 0.615556
2019-05-07 05:00:00 NaN ... NaN
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 5 columns]
The calculation is again element-wise, so the / is applied for the
values in each row.
Also other mathematical operators (+, -, *, /,…) or
logical operators (<, >, ==,…) work element-wise. The latter was already
used in the subset data tutorial to filter
rows of a table using a conditional expression.
If you need more advanced logic, you can use arbitrary Python code via apply().
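For example (a sketch; the threshold and the new column name london_level are made up here), flagging hours with elevated London values:
air_quality["london_level"] = air_quality["station_london"].apply(
    lambda v: "high" if v > 25 else "low"   # note: NaN values end up as "low" here
)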
I want to rename the data columns to the corresponding station identifiers used by OpenAQ.
In [8]: air_quality_renamed = air_quality.rename(
...: columns={
...: "station_antwerp": "BETR801",
...: "station_paris": "FR04014",
...: "station_london": "London Westminster",
...: }
...: )
...:
In [9]: air_quality_renamed.head()
Out[9]:
BETR801 FR04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
The rename() function can be used for both row labels and column
labels. Provide a dictionary with the keys the current names and the
values the new names to update the corresponding names.
The mapping should not be restricted to fixed names only, but can be a
mapping function as well. For example, converting the column names to
lowercase letters can be done using a function as well:
In [10]: air_quality_renamed = air_quality_renamed.rename(columns=str.lower)
In [11]: air_quality_renamed.head()
Out[11]:
betr801 fr04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
To user guide: Details about column or row label renaming are provided in the user guide section on renaming labels.
REMEMBER
Create a new column by assigning the output to the DataFrame with a
new column name in between the [].
Operations are element-wise, no need to loop over rows.
Use rename with a dictionary or function to rename row labels or
column names.
To user guide: The user guide contains a separate section on column addition and deletion.
| 849
| 993
|
How to create a new column from an existing column in a pandas dataframe
I have the following pandas dataframe:
df = pd.DataFrame([-0.167085, 0.009688, -0.034906, -2.393235, 1.006652],
index=['a', 'b', 'c', 'd', 'e'],
columns=['Feature Importances'])
Output:
Without having to use any loops, what is the best way to create a new column (say called Difference from 0) where the values in this new Difference from 0 column are the distances from 0 for each of the Feature Importances values. For eg. for a, Difference from 0 value would be 0 - (-0.167085) = 0.167085, for e, Difference from 0 value would be 1.006652 - 0 = 1.006652, etc.
Many thanks in advance.
|
70,480,591
|
Count level 1 size per level 0 in multi index and add new column
|
<p>What is a pythonic way of counting level 1 size per level 0 in multi index and creating a new column (named <code>counts</code>). I can achieve this in the following way but would like to gain an understanding of any simpler approaches:</p>
<p><strong>Code</strong></p>
<pre><code>df = pd.DataFrame({'STNAME':['AL'] * 3 + ['MI'] * 4,
'CTYNAME':list('abcdefg'),
'COL': range(7) }).set_index(['STNAME','CTYNAME'])
print(df)
COL
STNAME CTYNAME
AL a 0
b 1
c 2
MI d 3
e 4
f 5
g 6
df1 = df.groupby(level=0).size().reset_index(name='count')
counts = df.merge(df1,left_on="STNAME",right_on="STNAME")["count"].values
df["counts"] = counts
</code></pre>
<p>This is the desired output:</p>
<pre><code> COL counts
STNAME CTYNAME
AL a 0 3
b 1 3
c 2 3
MI d 3 4
e 4 4
f 5 4
g 6 4
</code></pre>
| 70,480,778
| 2021-12-25T14:13:35.943000
| 1
| null | 0
| 46
|
python|pandas
|
<p>You can use <code>groupby.transform</code> with size here instead of merging:</p>
<pre><code>output = df.assign(Counts=df.groupby(level=0)['COL'].transform('size'))
</code></pre>
<hr />
<pre><code>print(output)
COL Counts
STNAME CTYNAME
AL a 0 3
b 1 3
c 2 3
MI d 3 4
e 4 4
f 5 4
g 6 4
</code></pre>
| 2021-12-25T14:48:35.507000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html
|
pandas.DataFrame.reset_index#
pandas.DataFrame.reset_index#
DataFrame.reset_index(level=None, *, drop=False, inplace=False, col_level=0, col_fill='', allow_duplicates=_NoDefault.no_default, names=None)[source]#
You can use groupby.transform with size here instead of merging:
output = df.assign(Counts=df.groupby(level=0)['COL'].transform('size'))
print(output)
COL Counts
STNAME CTYNAME
AL a 0 3
b 1 3
c 2 3
MI d 3 4
e 4 4
f 5 4
g 6 4
Reset the index, or a level of it.
Reset the index of the DataFrame, and use the default one instead.
If the DataFrame has a MultiIndex, this method can remove one or more
levels.
Parameters
level : int, str, tuple, or list, default None. Only remove the given levels from the index. Removes all levels by
default.
drop : bool, default False. Do not try to insert index into dataframe columns. This resets
the index to the default integer index.
inplace : bool, default False. Whether to modify the DataFrame rather than creating a new one.
col_level : int or str, default 0. If the columns have multiple levels, determines which level the
labels are inserted into. By default it is inserted into the first
level.
col_fill : object, default ‘’. If the columns have multiple levels, determines how the other
levels are named. If None then the index name is repeated.
allow_duplicates : bool, optional, default lib.no_default. Allow duplicate column labels to be created.
New in version 1.5.0.
names : int, str or 1-dimensional list, default None. Using the given string, rename the DataFrame column which contains the
index data. If the DataFrame has a MultiIndex, this has to be a list or
tuple with length equal to the number of levels.
New in version 1.5.0.
Returns
DataFrame or None. DataFrame with the new index or None if inplace=True.
See also
DataFrame.set_index : Opposite of reset_index.
DataFrame.reindex : Change to new indices or expand indices.
DataFrame.reindex_like : Change to same indices as other DataFrame.
Examples
>>> df = pd.DataFrame([('bird', 389.0),
... ('bird', 24.0),
... ('mammal', 80.5),
... ('mammal', np.nan)],
... index=['falcon', 'parrot', 'lion', 'monkey'],
... columns=('class', 'max_speed'))
>>> df
class max_speed
falcon bird 389.0
parrot bird 24.0
lion mammal 80.5
monkey mammal NaN
When we reset the index, the old index is added as a column, and a
new sequential index is used:
>>> df.reset_index()
index class max_speed
0 falcon bird 389.0
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
We can use the drop parameter to avoid the old index being added as
a column:
>>> df.reset_index(drop=True)
class max_speed
0 bird 389.0
1 bird 24.0
2 mammal 80.5
3 mammal NaN
You can also use reset_index with MultiIndex.
>>> index = pd.MultiIndex.from_tuples([('bird', 'falcon'),
... ('bird', 'parrot'),
... ('mammal', 'lion'),
... ('mammal', 'monkey')],
... names=['class', 'name'])
>>> columns = pd.MultiIndex.from_tuples([('speed', 'max'),
... ('species', 'type')])
>>> df = pd.DataFrame([(389.0, 'fly'),
... ( 24.0, 'fly'),
... ( 80.5, 'run'),
... (np.nan, 'jump')],
... index=index,
... columns=columns)
>>> df
speed species
max type
class name
bird falcon 389.0 fly
parrot 24.0 fly
mammal lion 80.5 run
monkey NaN jump
Using the names parameter, choose a name for the index column:
>>> df.reset_index(names=['classes', 'names'])
classes names speed species
max type
0 bird falcon 389.0 fly
1 bird parrot 24.0 fly
2 mammal lion 80.5 run
3 mammal monkey NaN jump
If the index has multiple levels, we can reset a subset of them:
>>> df.reset_index(level='class')
class speed species
max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
If we are not dropping the index, by default, it is placed in the top
level. We can place it in another level:
>>> df.reset_index(level='class', col_level=1)
speed species
class max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
When the index is inserted under another level, we can specify under
which one with the parameter col_fill:
>>> df.reset_index(level='class', col_level=1, col_fill='species')
species speed species
class max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
If we specify a nonexistent level for col_fill, it is created:
>>> df.reset_index(level='class', col_level=1, col_fill='genus')
genus speed species
class max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
| 215
| 621
|
Count level 1 size per level 0 in multi index and add new column
What is a pythonic way of counting level 1 size per level 0 in multi index and creating a new column (named counts). I can achieve this in the following way but would like to gain an understanding of any simpler approaches:
Code
df = pd.DataFrame({'STNAME':['AL'] * 3 + ['MI'] * 4,
'CTYNAME':list('abcdefg'),
'COL': range(7) }).set_index(['STNAME','CTYNAME'])
print(df)
COL
STNAME CTYNAME
AL a 0
b 1
c 2
MI d 3
e 4
f 5
g 6
df1 = df.groupby(level=0).size().reset_index(name='count')
counts = df.merge(df1,left_on="STNAME",right_on="STNAME")["count"].values
df["counts"] = counts
This is the desired output:
COL counts
STNAME CTYNAME
AL a 0 3
b 1 3
c 2 3
MI d 3 4
e 4 4
f 5 4
g 6 4
|
60,579,576
|
AND operation in the .str.contains method
|
<p>I have a toy dataframe:</p>
<pre><code>toy_df = pd.DataFrame({'col1': ["apple is green",
"banana is yellow",
"strawberry is red"],
'col2': ["dog is kind",
"cat is smart",
"lion is brave"]})
</code></pre>
<p>with output: </p>
<pre><code> col1 col2
0 apple is green dog is kind
1 banana is yellow cat is smart
2 strawberry is red lion is brave
</code></pre>
<p>I have two targets to search for: the first is</p>
<p><code>apple</code> or <code>lion</code>
and the second one is <code>brave</code>:</p>
<p><code>targets = 'apple|lion'
target2 = 'brave'</code></p>
<p>I can search with the first target easily:</p>
<pre><code>toy_df[toy_df.apply(lambda x: x.str.contains(targets)).any(axis=1)]
</code></pre>
<p>The output is:</p>
<pre><code> col1 col2
0 apple is green dog is kind
2 strawberry is red lion is brave
</code></pre>
<p>But how should I search with TWO targets (<code>targets</code> and <code>target2</code>) to get this output:</p>
<pre><code> col1 col2
2 strawberry is red lion is brave
</code></pre>
| 60,579,622
| 2020-03-07T16:12:27.710000
| 1
| null | 1
| 46
|
python|pandas
|
<p>Use logical <strong>AND</strong> (<code>&</code>) to combine both boolean conditions:</p>
<h3>Example</h3>
<pre><code>mask1 = toy_df.apply(lambda x: x.str.contains(targets)).any(axis=1)
mask2 = toy_df.apply(lambda x: x.str.contains(target2)).any(axis=1)
toy_df[mask1 & mask2]
</code></pre>
<p>[out]</p>
<pre><code> col1 col2
2 strawberry is red lion is brave
</code></pre>
<hr>
<h3>Update - full code</h3>
<pre><code>toy_df = pd.DataFrame({'col1': ["apple is green",
"banana is yellow",
"strawberry is red"],
'col2': ["dog is kind",
"cat is smart",
"lion is brave"]})
targets = 'apple|lion'
target2 = 'brave'
mask1 = toy_df.apply(lambda x: x.str.contains(targets)).any(axis=1)
mask2 = toy_df.apply(lambda x: x.str.contains(target2)).any(axis=1)
toy_df[mask1 & mask2]
</code></pre>
| 2020-03-07T16:17:21.050000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.str.contains.html
|
pandas.Series.str.contains#
pandas.Series.str.contains#
Series.str.contains(pat, case=True, flags=0, na=None, regex=True)[source]#
Test if pattern or regex is contained within a string of a Series or Index.
Return boolean Series or Index based on whether a given pattern or regex is
contained within a string of a Series or Index.
Parameters
pat : str. Character sequence or regular expression.
case : bool, default True. If True, case sensitive.
Use logical AND (&) to combine both boolean conditions:
Example
mask1 = toy_df.apply(lambda x: x.str.contains(targets)).any(axis=1)
mask2 = toy_df.apply(lambda x: x.str.contains(target2)).any(axis=1)
toy_df[mask1 & mask2]
[out]
col1 col2
2 strawberry is red lion is brave
Update - full code
toy_df = pd.DataFrame({'col1': ["apple is green",
"banana is yellow",
"strawberry is red"],
'col2': ["dog is kind",
"cat is smart",
"lion is brave"]})
targets = 'apple|lion'
target2 = 'brave'
mask1 = toy_df.apply(lambda x: x.str.contains(targets)).any(axis=1)
mask2 = toy_df.apply(lambda x: x.str.contains(target2)).any(axis=1)
toy_df[mask1 & mask2]
flags : int, default 0 (no flags). Flags to pass through to the re module, e.g. re.IGNORECASE.
na : scalar, optional. Fill value for missing values. The default depends on dtype of the
array. For object-dtype, numpy.nan is used. For StringDtype,
pandas.NA is used.
regex : bool, default True. If True, assumes the pat is a regular expression.
If False, treats the pat as a literal string.
Returns
Series or Index of boolean values. A Series or Index of boolean values indicating whether the
given pattern is contained within the string of each element
of the Series or Index.
See also
match : Analogous, but stricter, relying on re.match instead of re.search.
Series.str.startswith : Test if the start of each string element matches a pattern.
Series.str.endswith : Same as startswith, but tests the end of string.
Examples
Returning a Series of booleans using only a literal pattern.
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
Returning an Index of booleans using only a literal pattern.
>>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
Specifying case sensitivity using case.
>>> s1.str.contains('oG', case=True, regex=True)
0 False
1 False
2 False
3 False
4 NaN
dtype: object
Specifying na to be False instead of NaN replaces NaN values
with False. If Series or Index does not contain NaN values
the resultant dtype will be bool, otherwise, an object dtype.
>>> s1.str.contains('og', na=False, regex=True)
0 False
1 True
2 False
3 False
4 False
dtype: bool
Returning ‘house’ or ‘dog’ when either expression occurs in a string.
>>> s1.str.contains('house|dog', regex=True)
0 False
1 True
2 True
3 False
4 NaN
dtype: object
Ignoring case sensitivity using flags with regex.
>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0 False
1 False
2 True
3 False
4 NaN
dtype: object
Returning any digit using regular expression.
>>> s1.str.contains('\\d', regex=True)
0 False
1 False
2 False
3 True
4 NaN
dtype: object
Ensure pat is not a literal pattern when regex is set to True.
Note in the following example one might expect only s2[1] and s2[3] to
return True. However, ‘.0’ as a regex matches any character
followed by a 0.
>>> s2 = pd.Series(['40', '40.0', '41', '41.0', '35'])
>>> s2.str.contains('.0', regex=True)
0 True
1 True
2 False
3 True
4 False
dtype: bool
| 444
| 1,233
|
AND operation in the .str.contains method
I have a toy dataframe:
toy_df = pd.DataFrame({'col1': ["apple is green",
"banana is yellow",
"strawberry is red"],
'col2': ["dog is kind",
"cat is smart",
"lion is brave"]})
with output:
col1 col2
0 apple is green dog is kind
1 banana is yellow cat is smart
2 strawberry is red lion is brave
I have two targets to search for: the first is
apple or lion
and the second one is brave:
targets = 'apple|lion'
target2 = 'brave'
I can search with the first target easily:
toy_df[toy_df.apply(lambda x: x.str.contains(targets)).any(axis=1)]
The output is:
col1 col2
0 apple is green dog is kind
2 strawberry is red lion is brave
But how should I search with TWO targets (targets and target2) to get this output:
col1 col2
2 strawberry is red lion is brave
|
64,494,424
|
Proper way to convert to an int
|
<p>What would be the proper way to do the following:</p>
<pre><code>df['file_size'] = np.where(df['file_size'], int(df['file_size']), None)
</code></pre>
<p>Currently the error I get is:</p>
<blockquote>
<p>TypeError: cannot convert the series to <class 'int'></p>
</blockquote>
<p>And when trying to do it directly:</p>
<pre><code>df['file_size'] = df['file_size'].astype(int)
</code></pre>
<blockquote>
<p>ValueError: Cannot convert non-finite values (NA or inf) to integer</p>
</blockquote>
| 64,494,444
| 2020-10-23T05:36:27.157000
| 1
| null | 1
| 47
|
python|pandas
|
<p>Use <a href="https://pandas.pydata.org/docs/user_guide/integer_na.html" rel="nofollow noreferrer"><code>integer_na</code></a> (the nullable integer dtype), because by default even one missing value converts an integer column to floats (since <code>NaN</code> is a <code>float</code>); the special type <code>Int64</code> resolves this problem:</p>
<pre><code>df = pd.DataFrame({'file_size':[5,4,np.nan,None]})
df['file_size'] = df['file_size'].astype("Int64")
print (df)
file_size
0 5
1 4
2 <NA>
3 <NA>
</code></pre>
| 2020-10-23T05:37:52.213000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.convert_dtypes.html
|
pandas.DataFrame.convert_dtypes#
pandas.DataFrame.convert_dtypes#
Use integer_na (the nullable integer dtype), because by default even one missing value converts an integer column to floats (since NaN is a float); the special type Int64 resolves this problem:
df = pd.DataFrame({'file_size':[5,4,np.nan,None]})
df['file_size'] = df['file_size'].astype("Int64")
print (df)
file_size
0 5
1 4
2 <NA>
3 <NA>
DataFrame.convert_dtypes(infer_objects=True, convert_string=True, convert_integer=True, convert_boolean=True, convert_floating=True)[source]#
Convert columns to best possible dtypes using dtypes supporting pd.NA.
New in version 1.0.0.
Parameters
infer_objects : bool, default True. Whether object dtypes should be converted to the best possible types.
convert_string : bool, default True. Whether object dtypes should be converted to StringDtype().
convert_integer : bool, default True. Whether, if possible, conversion can be done to integer extension types.
convert_boolean : bool, default True. Whether object dtypes should be converted to BooleanDtypes().
convert_floating : bool, default True. Whether, if possible, conversion can be done to floating extension types.
If convert_integer is also True, preference will be given to integer
dtypes if the floats can be faithfully cast to integers.
New in version 1.2.0.
Returns
Series or DataFrame. Copy of input object with new dtype.
See also
infer_objects : Infer dtypes of objects.
to_datetime : Convert argument to datetime.
to_timedelta : Convert argument to timedelta.
to_numeric : Convert argument to a numeric type.
Notes
By default, convert_dtypes will attempt to convert a Series (or each
Series in a DataFrame) to dtypes that support pd.NA. By using the options
convert_string, convert_integer, convert_boolean and
convert_boolean, it is possible to turn off individual conversions
to StringDtype, the integer extension types, BooleanDtype
or floating extension types, respectively.
For object-dtyped columns, if infer_objects is True, use the inference
rules as during normal Series/DataFrame construction. Then, if possible,
convert to StringDtype, BooleanDtype or an appropriate integer
or floating extension type, otherwise leave as object.
If the dtype is integer, convert to an appropriate integer extension type.
If the dtype is numeric, and consists of all integers, convert to an
appropriate integer extension type. Otherwise, convert to an
appropriate floating extension type.
Changed in version 1.2: Starting with pandas 1.2, this method also converts float columns
to the nullable floating extension type.
In the future, as new dtypes are added that support pd.NA, the results
of this method will change to support those new dtypes.
Examples
>>> df = pd.DataFrame(
... {
... "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
... "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
... "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
... "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
... "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
... "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
... }
... )
Start with a DataFrame with default dtypes.
>>> df
a b c d e f
0 1 x True h 10.0 NaN
1 2 y False i NaN 100.5
2 3 z NaN NaN 20.0 200.0
>>> df.dtypes
a int32
b object
c object
d object
e float64
f float64
dtype: object
Convert the DataFrame to use best possible dtypes.
>>> dfn = df.convert_dtypes()
>>> dfn
a b c d e f
0 1 x True h 10 <NA>
1 2 y False i <NA> 100.5
2 3 z <NA> <NA> 20 200.0
>>> dfn.dtypes
a Int32
b string
c boolean
d string
e Int64
f Float64
dtype: object
Start with a Series of strings and missing data represented by np.nan.
>>> s = pd.Series(["a", "b", np.nan])
>>> s
0 a
1 b
2 NaN
dtype: object
Obtain a Series with dtype StringDtype.
>>> s.convert_dtypes()
0 a
1 b
2 <NA>
dtype: string
| 70
| 413
|
Proper way to convert to an int
What would be the proper way to do the following:
df['file_size'] = np.where(df['file_size'], int(df['file_size']), None)
Currently the error I get is:
TypeError: cannot convert the series to <class 'int'>
And when trying to do it directly:
df['file_size'] = df['file_size'].astype(int)
ValueError: Cannot convert non-finite values (NA or inf) to integer
|
66,979,642
|
How to remove values in a column of dataframe by position?
|
<p>I have a data frame with the following values:</p>
<pre><code>data = {'Type':['sama', 'samb', 'samc', 'samd'],
'Annotation':["a|b|c|d||||j|s|,g|b|k|d||||j|s|o,c|j|k|m||||j|k|o",
"g|b|k|d|e|||j|s|o",
"g|b|k|d|||z|j|s|o,c|j|k|m||||g|s|o",
"g|b|k|d|e|y|u|j|h|i"]}
df1 = pd.DataFrame(data)
df1
</code></pre>
<p>It looks as follows:</p>
<pre><code> Type Annotation
0 sama a|b|c|d||||j|s|,g|b|k|d||||j|s|o,c|j|k|m||||j|k|o
1 samb g|b|k|d|e|||j|s|o
2 samc g|b|k|d|||z|j|s|o,c|j|k|m||||g|s|o
3 samd g|b|k|d|e|y|u|j|h|i
</code></pre>
<p>The <code>df1['Annotation']</code> column holds entries separated by a <code>comma</code>. Each entry contains 10 values separated by <code>|</code>.</p>
<p>I want to remove certain values from each entry by position. Here I want to remove positions <code>3,6,7,8</code> so that the output looks like:</p>
<pre><code> Type Annotation
 0   sama    a|b|d||s|,g|b|d||s|o,c|j|m||k|o
1 samb g|b|d|e|s|o
2 samc g|b|d||s|o,c|j|m||s|o
3 samd g|b|d|e|h|i
</code></pre>
| 66,979,678
| 2021-04-07T04:38:51.810000
| 1
| null | 1
| 47
|
python|pandas
|
<p>Use a custom lambda function: split by <code>,</code>, then split by <code>|</code> and keep the values whose positions are not in the <code>pos</code> list, and finally join back with <code>|</code> and <code>,</code>:</p>
<pre><code>pos = [3,6,7,8]
f = lambda x: ','.join('|'.join(z for i, z in enumerate(y.split('|')) if i + 1 not in pos)
for y in x.split(','))
df1['Annotation'] = df1['Annotation'].apply(f)
print (df1)
Type Annotation
0 sama a|b|d||s|,g|b|d||s|o,c|j|m||k|o
1 samb g|b|d|e|s|o
2 samc g|b|d||s|o,c|j|m||s|o
3 samd g|b|d|e|h|i
</code></pre>
| 2021-04-07T04:43:51.950000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop.html
|
pandas.DataFrame.drop#
pandas.DataFrame.drop#
DataFrame.drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')[source]#
Drop specified labels from rows or columns.
Remove rows or columns by specifying label names and corresponding
axis, or by specifying directly index or column names. When using a
Use a custom lambda function: split by ,, then split by | and keep the values whose positions are not in the pos list, and finally join back with | and ,:
pos = [3,6,7,8]
f = lambda x: ','.join('|'.join(z for i, z in enumerate(y.split('|')) if i + 1 not in pos)
for y in x.split(','))
df1['Annotation'] = df1['Annotation'].apply(f)
print (df1)
Type Annotation
0 sama a|b|d||s|,g|b|d||s|o,c|j|m||k|o
1 samb g|b|d|e|s|o
2 samc g|b|d||s|o,c|j|m||s|o
3 samd g|b|d|e|h|i
multi-index, labels on different levels can be removed by specifying
the level. See the user guide
for more information about the now unused levels.
Parameters
labels : single label or list-like. Index or column labels to drop. A tuple will be used as a single
label and not treated as a list-like.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0. Whether to drop labels from the index (0 or ‘index’) or
columns (1 or ‘columns’).
index : single label or list-like. Alternative to specifying axis (labels, axis=0
is equivalent to index=labels).
columns : single label or list-like. Alternative to specifying axis (labels, axis=1
is equivalent to columns=labels).
level : int or level name, optional. For MultiIndex, level from which the labels will be removed.
inplace : bool, default False. If False, return a copy. Otherwise, do operation
inplace and return None.
errors : {‘ignore’, ‘raise’}, default ‘raise’. If ‘ignore’, suppress error and only existing labels are
dropped.
Returns
DataFrame or None. DataFrame without the removed index or column labels or
None if inplace=True.
Raises
KeyError. If any of the labels is not found in the selected axis.
See also
DataFrame.loc : Label-location based indexer for selection by label.
DataFrame.dropna : Return DataFrame with labels on given axis omitted where (all or any) data are missing.
DataFrame.drop_duplicates : Return DataFrame with duplicate rows removed, optionally only considering certain columns.
Series.drop : Return Series with specified index labels removed.
Examples
>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
Drop columns
>>> df.drop(['B', 'C'], axis=1)
A D
0 0 3
1 4 7
2 8 11
>>> df.drop(columns=['B', 'C'])
A D
0 0 3
1 4 7
2 8 11
Drop a row by index
>>> df.drop([0, 1])
A B C D
2 8 9 10 11
Drop columns and/or rows of MultiIndex DataFrame
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
... data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
... [250, 150], [1.5, 0.8], [320, 250],
... [1, 0.8], [0.3, 0.2]])
>>> df
big small
lama speed 45.0 30.0
weight 200.0 100.0
length 1.5 1.0
cow speed 30.0 20.0
weight 250.0 150.0
length 1.5 0.8
falcon speed 320.0 250.0
weight 1.0 0.8
length 0.3 0.2
Drop a specific index combination from the MultiIndex
DataFrame, i.e., drop the combination 'falcon' and
'weight', which deletes only the corresponding row
>>> df.drop(index=('falcon', 'weight'))
big small
lama speed 45.0 30.0
weight 200.0 100.0
length 1.5 1.0
cow speed 30.0 20.0
weight 250.0 150.0
length 1.5 0.8
falcon speed 320.0 250.0
length 0.3 0.2
>>> df.drop(index='cow', columns='small')
big
lama speed 45.0
weight 200.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
>>> df.drop(index='length', level=1)
big small
lama speed 45.0 30.0
weight 200.0 100.0
cow speed 30.0 20.0
weight 250.0 150.0
falcon speed 320.0 250.0
weight 1.0 0.8
| 346
| 846
|
How to remove values in a column of dataframe by position?
I have a data frame with the following values:
data = {'Type':['sama', 'samb', 'samc', 'samd'],
'Annotation':["a|b|c|d||||j|s|,g|b|k|d||||j|s|o,c|j|k|m||||j|k|o",
"g|b|k|d|e|||j|s|o",
"g|b|k|d|||z|j|s|o,c|j|k|m||||g|s|o",
"g|b|k|d|e|y|u|j|h|i"]}
df1 = pd.DataFrame(data)
df1
It looks as follows:
Type Annotation
0 sama a|b|c|d||||j|s|,g|b|k|d||||j|s|o,c|j|k|m||||j|k|o
1 samb g|b|k|d|e|||j|s|o
2 samc g|b|k|d|||z|j|s|o,c|j|k|m||||g|s|o
3 samd g|b|k|d|e|y|u|j|h|i
The df1['Annotation'] column holds entries separated by a comma. Each entry contains 10 values separated by |.
I want to remove certain values from each entry by position. Here I want to remove positions 3,6,7,8 so that the output looks like:
Type Annotation
 0   sama    a|b|d||s|,g|b|d||s|o,c|j|m||k|o
1 samb g|b|d|e|s|o
2 samc g|b|d||s|o,c|j|m||s|o
3 samd g|b|d|e|h|i
|
67,775,134
|
Duplicate DataFrame with new columns with different values
|
<p>I have a pandas DataFrame.</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime
d = {'account_id': [1, 2], 'type': ['a', 'b']}
df = pd.DataFrame(data=d)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">account_id</th>
<th style="text-align: left;">type</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">a</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">b</td>
</tr>
</tbody>
</table>
</div>
<p>I want to add two columns, <code>include_images</code> and <code>data_since</code>: one copy of every row of the original DataFrame should get <code>True</code> and a date, and a second copy of every row should get <code>False</code> and <code>NaN</code>.</p>
<p>Is there a more efficient way of writing this than like so:</p>
<pre><code>df_a = df.copy()
df_a['include_images'] = True
df_a['data_since'] = datetime(2018, 1, 1)
df_b = df.copy()
df_b['include_images'] = False
df_b['data_since'] = np.nan
df = pd.concat([df_a, df_b], ignore_index=True)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">account_id</th>
<th style="text-align: left;">type</th>
<th style="text-align: left;">include_images</th>
<th style="text-align: left;">data_since</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">True</td>
<td style="text-align: left;">2018-01-01 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">True</td>
<td style="text-align: left;">2018-01-01 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">NaT</td>
</tr>
</tbody>
</table>
</div>
| 67,775,582
| 2021-05-31T13:55:22.043000
| 2
| null | 1
| 48
|
python|pandas
|
<p>Try <code>assign</code> which creates a copy on-the-fly:</p>
<pre><code>const_date = pd.Timestamp('2018-01-01')
out = pd.concat([df.assign(include_img=True, data_since=const_date),
df.assign(include_img=False, data_since=pd.NaT)],
ignore_index=True)
</code></pre>
<p>Output:</p>
<pre><code> account_id type include_img data_since
0 1 a True 2018-01-01
1 2 b True 2018-01-01
2 1 a False NaT
3 2 b False NaT
</code></pre>
| 2021-05-31T14:24:59.233000
| 2
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
Try assign which creates a copy on-the-fly:
const_date = pd.Timestamp('2018-01-01')
out = pd.concat([df.assign(include_img=True, data_since=const_date),
df.assign(include_img=False, data_since=pd.NaT)],
ignore_index=True)
Output:
account_id type include_img data_since
0 1 a True 2018-01-01
1 2 b True 2018-01-01
2 1 a False NaT
3 2 b False NaT
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
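As a small sketch of that rule (the names here are made up for illustration), a shared
index name survives concatenation while conflicting names do not:

idx_a = pd.Index([0, 1], name="id")
idx_b = pd.Index([2, 3], name="id")
pd.concat([pd.Series([1, 2], index=idx_a), pd.Series([3, 4], index=idx_b)]).index.name
# 'id'   -- both inputs agree, so the name is kept

idx_c = pd.Index([2, 3], name="other")
pd.concat([pd.Series([1, 2], index=idx_a), pd.Series([3, 4], index=idx_c)]).index.name
# None   -- the names disagree, so the result's index is unnamed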
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
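Only the index labels common to df1 and df4 (2 and 3) survive the inner join; the
result, shown as a figure in the rendered docs, is roughly:

    A   B   C   D   B   D   F
2  A2  B2  C2  D2  B2  D2  F2
3  A3  B3  C3  D3  B3  D3  F3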
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
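The Series contributes a single column named after it (X), aligned on the shared
index, so the result looks roughly like:

    A   B   C   D   X
0  A0  B0  C0  D0  X0
1  A1  B1  C1  D1  X1
2  A2  B2  C2  D2  X2
3  A3  B3  C3  D3  X3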
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists on letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
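For example (the row label "new" below is invented purely for illustration), a
one-row DataFrame carrying the label you want keeps it through the concatenation:

row = pd.DataFrame([s2], index=["new"])
pd.concat([df1, row])
#       A   B   C   D
# 0    A0  B0  C0  D0
# 1    A1  B1  C1  D1
# 2    A2  B2  C2  D2
# 3    A3  B3  C3  D3
# new  X0  X1  X2  X3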
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
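Each key value occurs exactly once in both frames, so this is a one-to-one join and
the result (a figure in the rendered docs) is simply the rows paired up:

  key   A   B   C   D
0  K0  A0  B0  C0  D0
1  K1  A1  B1  C1  D1
2  K2  A2  B2  C2  D2
3  K3  A3  B3  C3  D3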
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
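Worked out by hand: the only key pairs present in both frames are (K0, K0), which
occurs once on each side, and (K1, K0), which occurs once on the left and twice on
the right, so the inner result has three rows:

  key1 key2   A   B   C   D
0   K0   K0  A0  B0  C0  D0
1   K1   K0  A2  B2  C1  D1
2   K1   K0  A2  B2  C2  D2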
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method
SQL Join Name
Description
left
LEFT OUTER JOIN
Use keys from left frame only
right
RIGHT OUTER JOIN
Use keys from right frame only
outer
FULL OUTER JOIN
Use union of keys from both frames
inner
INNER JOIN
Use intersection of keys from both frames
cross
CROSS JOIN
Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
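Both left rows and all three right rows share B == 2, so the merge produces every
2 × 3 = 6 combination, with the overlapping A column suffixed as A_x and A_y:

   A_x  B  A_y
0    1  2    4
1    1  2    5
2    1  2    6
3    2  2    4
4    2  2    5
5    2  2    6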
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin
_merge value
Merge key only in 'left' frame
left_only
Merge key only in 'right' frame
right_only
Merge key in both frames
both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
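join defaults to a left join on the index, so every row of left is kept and K1,
which has no counterpart in right, picks up NaN:

     A   B    C    D
K0  A0  B0   C0   D0
K1  A1  B1  NaN  NaN
K2  A2  B2   C2   D2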
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent but less verbose and more memory efficient / faster than this.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
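Only K0 is present in both frames, and it occurs twice on the right, so the merge
yields two rows; without suffixes the overlapping v columns come out as v_x and v_y,
and with the explicit tuple they become v_l and v_r:

    k  v_l  v_r
0  K0    1    4
1  K0    1    5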
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
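A minimal sketch of the difference, using two throwaway frames (a and b below are
invented for illustration): combine_first only fills holes in the caller, whereas
update overwrites the caller in place wherever the other object has non-NA values.

import numpy as np
import pandas as pd

a = pd.DataFrame({"x": [1.0, np.nan]})
b = pd.DataFrame({"x": [9.0, 2.0]})

a.combine_first(b)   # x becomes [1.0, 2.0]: the NaN is filled, the existing 1.0 is kept

c = a.copy()
c.update(b)          # c["x"] becomes [9.0, 2.0]: non-NA values from b overwrite c in place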
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example; we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will
be omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 733
| 1,213
|
Duplicate DataFrame with new columns with different values
I have a pandas DataFrame.
import pandas as pd
import numpy as np
from datetime import datetime
d = {'account_id': [1, 2], 'type': ['a', 'b']}
df = pd.DataFrame(data=d)
   account_id type
0           1    a
1           2    b
I want to add two columns, include_images and data_since: once with True and a date for every row of the original DataFrame, and once with False and NaN for every row of the original DataFrame.
Is there a more efficient way of writing this than the following:
df_a = df.copy()
df_a['include_images'] = True
df_a['data_since'] = datetime(2018, 1, 1)
df_b = df.copy()
df_b['include_images'] = False
df_b['data_since'] = np.nan
df = pd.concat([df_a, df_b], ignore_index=True)
   account_id type  include_images           data_since
0           1    a            True  2018-01-01 00:00:00
1           2    b            True  2018-01-01 00:00:00
2           1    a           False                  NaT
3           2    b           False                  NaT
|
67,005,037
|
Rules in String into in Pandas
|
I would like to use Pandas to read simple rules in a string.
I wrote this and it works:

import numpy as np
import pandas as pd
d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)
a = "(df['col1'] > 1) & (df['col2'] > 3)"
df[eval(a)]

But I would like to remove the df[] and the '()' from my string value.
I don't know if it's possible, or maybe another way exists to write this.
Thank you
| 67,005,119
| 2021-04-08T13:28:40.147000
| 1
| null | 0
| 50
|
python|pandas
|
>>> import numpy as np
>>> import pandas as pd
>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> df = pd.DataFrame(data=d)
>>> df
| | col1 | col2 |
|---:|-------:|-------:|
| 0 | 1 | 3 |
| 1 | 2 | 4 |
>>> a = 'col1 > 1 & col2 > 3'
>>> df.query(a)
| | col1 | col2 |
|---:|-------:|-------:|
| 1 | 2 | 4 |
| 2021-04-08T13:32:47
| 2
|
https://pandas.pydata.org/docs/development/contributing_docstring.html
|
pandas docstring guide#
About docstrings and standards#
A Python docstring is a string used to document a Python module, class,
function or method, so programmers can understand what it does without having
to read the details of the implementation.
Also, it is a common practice to generate online (html) documentation
automatically from docstrings. Sphinx serves
this purpose.
The next example gives an idea of what a docstring looks like:
def add(num1, num2):
"""
Add up two integer numbers.
This function simply wraps the ``+`` operator, and does not
do anything interesting, except for illustrating what
the docstring of a very simple function looks like.
Parameters
----------
num1 : int
First number to add.
num2 : int
Second number to add.
Returns
-------
int
The sum of ``num1`` and ``num2``.
See Also
--------
subtract : Subtract one integer from another.
Examples
--------
>>> add(2, 2)
4
>>> add(25, 0)
25
>>> add(10, -10)
0
"""
return num1 + num2
Some standards regarding docstrings exist, which make them easier to read, and allow them to
be easily exported to other formats such as html or pdf.
The first conventions every Python docstring should follow are defined in
PEP-257.
As PEP-257 is quite broad, other more specific standards also exist. In the
case of pandas, the NumPy docstring convention is followed. These conventions are
explained in this document:
numpydoc docstring guide
(which is based in the original Guide to NumPy/SciPy documentation)
numpydoc is a Sphinx extension to support the NumPy docstring convention.
The standard uses reStructuredText (reST). reStructuredText is a markup
language that allows encoding styles in plain text files. Documentation
about reStructuredText can be found in:
Sphinx reStructuredText primer
Quick reStructuredText reference
Full reStructuredText specification
pandas has some helpers for sharing docstrings between related classes, see
Sharing docstrings.
The rest of this document will summarize all the above guidelines, and will
provide additional conventions specific to the pandas project.
Writing a docstring#
General rules#
Docstrings must be defined with three double-quotes. No blank lines should be
left before or after the docstring. The text starts in the next line after the
opening quotes. The closing quotes have their own line
(meaning that they are not at the end of the last sentence).
On rare occasions reST styles like bold text or italics will be used in
docstrings, but it is common to have inline code, which is presented between
backticks. The following are considered inline code:
The name of a parameter
Python code, a module, function, built-in, type, literal… (e.g. os,
list, numpy.abs, datetime.date, True)
A pandas class (in the form :class:`pandas.Series`)
A pandas method (in the form :meth:`pandas.Series.sum`)
A pandas function (in the form :func:`pandas.to_datetime`)
Note
To display only the last component of the linked class, method or
function, prefix it with ~. For example, :class:`~pandas.Series`
will link to pandas.Series but only display the last part, Series
as the link text. See Sphinx cross-referencing syntax
for details.
Good:
def add_values(arr):
"""
Add the values in ``arr``.
This is equivalent to Python ``sum`` of :meth:`pandas.Series.sum`.
Some sections are omitted here for simplicity.
"""
return sum(arr)
Bad:
def func():
"""Some function.
With several mistakes in the docstring.
It has a blank line after the signature ``def func():``.
The text 'Some function' should go in the line after the
opening quotes of the docstring, not in the same line.
There is a blank line between the docstring and the first line
of code ``foo = 1``.
The closing quotes should be in the next line, not in this one."""
foo = 1
bar = 2
return foo + bar
Section 1: short summary#
The short summary is a single sentence that expresses what the function does in
a concise way.
The short summary must start with a capital letter, end with a dot, and fit in
a single line. It needs to express what the object does without providing
details. For functions and methods, the short summary must start with an
infinitive verb.
Good:
def astype(dtype):
"""
Cast Series type.
This section will provide further details.
"""
pass
Bad:
def astype(dtype):
"""
Casts Series type.
Verb in third-person of the present simple, should be infinitive.
"""
pass
def astype(dtype):
"""
Method to cast Series type.
Does not start with verb.
"""
pass
def astype(dtype):
"""
Cast Series type
Missing dot at the end.
"""
pass
def astype(dtype):
"""
Cast Series type from its current type to the new type defined in
the parameter dtype.
Summary is too verbose and doesn't fit in a single line.
"""
pass
Section 2: extended summary#
The extended summary provides details on what the function does. It should not
go into the details of the parameters, or discuss implementation notes, which
go in other sections.
A blank line is left between the short summary and the extended summary.
Every paragraph in the extended summary ends with a dot.
The extended summary should provide details on why the function is useful and
their use cases, if it is not too generic.
def unstack():
"""
Pivot a row index to columns.
When using a MultiIndex, a level can be pivoted so each value in
the index becomes a column. This is especially useful when a subindex
is repeated for the main index, and data is easier to visualize as a
pivot table.
The index level will be automatically removed from the index when added
as columns.
"""
pass
Section 3: parameters#
The details of the parameters will be added in this section. This section has
the title “Parameters”, followed by a line with a hyphen under each letter of
the word “Parameters”. A blank line is left before the section title, but not
after, and not between the line with the word “Parameters” and the one with
the hyphens.
After the title, each parameter in the signature must be documented, including
*args and **kwargs, but not self.
The parameters are defined by their name, followed by a space, a colon, another
space, and the type (or types). Note that the space between the name and the
colon is important. Types are not defined for *args and **kwargs, but must
be defined for all other parameters. After the parameter definition, it is
required to have a line with the parameter description, which is indented, and
can have multiple lines. The description must start with a capital letter, and
finish with a dot.
For keyword arguments with a default value, the default will be listed after a
comma at the end of the type. The exact form of the type in this case will be
“int, default 0”. In some cases it may be useful to explain what the default
argument means, which can be added after a comma “int, default -1, meaning all
cpus”.
In cases where the default value is None, meaning that the value will not be
used, it is preferred to write "str, optional" instead of "str, default None".
When None is a value being used, we will keep the form “str, default None”.
For example, in df.to_csv(compression=None), None is not a value being used,
but means that compression is optional, and no compression is being used if not
provided. In this case we will use "str, optional". Only in cases like
func(value=None) and None is being used in the same way as 0 or foo
would be used, then we will specify “str, int or None, default None”.
Good:
class Series:
def plot(self, kind, color='blue', **kwargs):
"""
Generate a plot.
Render the data in the Series as a matplotlib plot of the
specified kind.
Parameters
----------
kind : str
Kind of matplotlib plot.
color : str, default 'blue'
Color name or rgb code.
**kwargs
These parameters will be passed to the matplotlib plotting
function.
"""
pass
Bad:
class Series:
def plot(self, kind, **kwargs):
"""
Generate a plot.
Render the data in the Series as a matplotlib plot of the
specified kind.
Note the blank line between the parameters title and the first
parameter. Also, note that after the name of the parameter ``kind``
and before the colon, a space is missing.
Also, note that the parameter descriptions do not start with a
capital letter, and do not finish with a dot.
Finally, the ``**kwargs`` parameter is missing.
Parameters
----------
kind: str
kind of matplotlib plot
"""
pass
Parameter types#
When specifying the parameter types, Python built-in data types can be used
directly (the Python type is preferred to the more verbose string, integer,
boolean, etc):
int
float
str
bool
For complex types, define the subtypes. For dict and tuple, as more than
one type is present, we use the brackets to help read the type (curly brackets
for dict and normal brackets for tuple):
list of int
dict of {str : int}
tuple of (str, int, int)
tuple of (str,)
set of str
In case where there are just a set of values allowed, list them in curly
brackets and separated by commas (followed by a space). If the values are
ordinal and they have an order, list them in this order. Otherwise, list
the default value first, if there is one:
{0, 10, 25}
{‘simple’, ‘advanced’}
{‘low’, ‘medium’, ‘high’}
{‘cat’, ‘dog’, ‘bird’}
If the type is defined in a Python module, the module must be specified:
datetime.date
datetime.datetime
decimal.Decimal
If the type is in a package, the module must be also specified:
numpy.ndarray
scipy.sparse.coo_matrix
If the type is a pandas type, also specify pandas except for Series and
DataFrame:
Series
DataFrame
pandas.Index
pandas.Categorical
pandas.arrays.SparseArray
If the exact type is not relevant, but must be compatible with a NumPy
array, array-like can be specified. If any type that can be iterated is
accepted, iterable can be used:
array-like
iterable
If more than one type is accepted, separate them by commas, except the
last two types, that need to be separated by the word ‘or’:
int or float
float, decimal.Decimal or None
str or list of str
If None is one of the accepted values, it always needs to be the last in
the list.
For axis, the convention is to use something like:
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
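As a hypothetical sketch of how that convention reads inside a Parameters section
(the function name and description below are invented for illustration):

def reduce(self, axis=None):
    """
    Reduce the object over the given axis.

    Parameters
    ----------
    axis : {0 or 'index', 1 or 'columns', None}, default None
        Axis to reduce over. ``None`` reduces over both axes.
    """
    pass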
Section 4: returns or yields#
If the method returns a value, it will be documented in this section. Also
if the method yields its output.
The title of the section will be defined in the same way as the “Parameters”.
With the names “Returns” or “Yields” followed by a line with as many hyphens
as the letters in the preceding word.
The documentation of the return is also similar to the parameters. But in this
case, no name will be provided, unless the method returns or yields more than
one value (a tuple of values).
The types for “Returns” and “Yields” are the same as the ones for the
“Parameters”. Also, the description must finish with a dot.
For example, with a single value:
def sample():
"""
Generate and return a random number.
The value is sampled from a continuous uniform distribution between
0 and 1.
Returns
-------
float
Random number generated.
"""
return np.random.random()
With more than one value:
import string
def random_letters():
"""
Generate and return a sequence of random letters.
The length of the returned string is also random, and is also
returned.
Returns
-------
length : int
Length of the returned string.
letters : str
String of random letters.
"""
length = np.random.randint(1, 10)
letters = ''.join(np.random.choice(string.ascii_lowercase)
for i in range(length))
return length, letters
If the method yields its value:
def sample_values():
"""
Generate an infinite sequence of random numbers.
The values are sampled from a continuous uniform distribution between
0 and 1.
Yields
------
float
Random number generated.
"""
while True:
yield np.random.random()
Section 5: see also#
This section is used to let users know about pandas functionality
related to the one being documented. In rare cases, if no related methods
or functions can be found at all, this section can be skipped.
An obvious example would be the head() and tail() methods. As tail() does
the equivalent of head() but at the end of the Series or DataFrame
instead of at the beginning, it is good to let the users know about it.
To give an intuition on what can be considered related, here there are some
examples:
loc and iloc, as they do the same, but in one case providing indices
and in the other positions
max and min, as they do the opposite
iterrows, itertuples and items, as it is easy that a user
looking for the method to iterate over columns ends up in the method to
iterate over rows, and vice-versa
fillna and dropna, as both methods are used to handle missing values
read_csv and to_csv, as they are complementary
merge and join, as one is a generalization of the other
astype and pandas.to_datetime, as users may be reading the
documentation of astype to know how to cast as a date, and the way to do
it is with pandas.to_datetime
where is related to numpy.where, as its functionality is based on it
When deciding what is related, you should mainly use your common sense and
think about what can be useful for the users reading the documentation,
especially the less experienced ones.
When relating to other libraries (mainly numpy), use the name of the module
first (not an alias like np). If the function is in a module which is not
the main one, like scipy.sparse, list the full module (e.g.
scipy.sparse.coo_matrix).
This section has a header, “See Also” (note the capital
S and A), followed by the line with hyphens and preceded by a blank line.
After the header, we will add a line for each related method or function,
followed by a space, a colon, another space, and a short description that
illustrates what this method or function does, why it is relevant in this
context, and what the key differences are between the documented function and
the one being referenced. The description must also end with a dot.
Note that in “Returns” and “Yields”, the description is located on the line
after the type. In this section, however, it is located on the same
line, with a colon in between. If the description does not fit on the same
line, it can continue onto other lines which must be further indented.
For example:
class Series:
def head(self):
"""
Return the first 5 elements of the Series.
This function is mainly useful to preview the values of the
Series without displaying the whole of it.
Returns
-------
Series
Subset of the original series with the 5 first values.
See Also
--------
Series.tail : Return the last 5 elements of the Series.
Series.iloc : Return a slice of the elements in the Series,
which can also be used to return the first or last n.
"""
return self.iloc[:5]
Section 6: notes#
This is an optional section used for notes about the implementation of the
algorithm, or to document technical aspects of the function behavior.
Feel free to skip it, unless you are familiar with the implementation of the
algorithm, or you discover some counter-intuitive behavior while writing the
examples for the function.
This section follows the same format as the extended summary section.
Section 7: examples#
This is one of the most important sections of a docstring, despite being
placed in the last position, as often people understand concepts better
by example than through accurate explanations.
Examples in docstrings, besides illustrating the usage of the function or
method, must be valid Python code, that returns the given output in a
deterministic way, and that can be copied and run by users.
Examples are presented as a session in the Python terminal. >>> is used to
present code. ... is used for code continuing from the previous line.
Output is presented immediately after the last line of code generating the
output (no blank lines in between). Comments describing the examples can
be added with blank lines before and after them.
The way to present examples is as follows:
Import required libraries (except numpy and pandas)
Create the data required for the example
Show a very basic example that gives an idea of the most common use case
Add examples with explanations that illustrate how the parameters can be
used for extended functionality
A simple example could be:
class Series:
def head(self, n=5):
"""
Return the first elements of the Series.
This function is mainly useful to preview the values of the
Series without displaying all of it.
Parameters
----------
n : int
Number of values to return.
Returns
-------
pandas.Series
Subset of the original series with the n first values.
See Also
--------
tail : Return the last n elements of the Series.
Examples
--------
>>> s = pd.Series(['Ant', 'Bear', 'Cow', 'Dog', 'Falcon',
... 'Lion', 'Monkey', 'Rabbit', 'Zebra'])
>>> s.head()
0 Ant
1 Bear
2 Cow
3 Dog
4 Falcon
dtype: object
With the ``n`` parameter, we can change the number of returned rows:
>>> s.head(n=3)
0 Ant
1 Bear
2 Cow
dtype: object
"""
return self.iloc[:n]
The examples should be as concise as possible. In cases where the complexity of
the function requires long examples, it is recommended to use blocks with headers
in bold. Use double star ** to make a text bold, like in **this example**.
Conventions for the examples#
Code in examples is assumed to always start with these two lines which are not
shown:
import numpy as np
import pandas as pd
Any other module used in the examples must be explicitly imported, one per line (as
recommended in PEP 8#imports)
and avoiding aliases. Avoid excessive imports, but if needed, imports from
the standard library go first, followed by third-party libraries (like
matplotlib).
When illustrating examples with a single Series use the name s, and if
illustrating with a single DataFrame use the name df. For indices,
idx is the preferred name. If a set of homogeneous Series or
DataFrame is used, name them s1, s2, s3… or df1,
df2, df3… If the data is not homogeneous, and more than one structure
is needed, name them with something meaningful, for example df_main and
df_to_join.
Data used in the example should be as compact as possible. The number of rows
is recommended to be around 4, but make it a number that makes sense for the
specific example. For example in the head method, it requires to be higher
than 5, to show the example with the default values. If doing the mean, we
could use something like [1, 2, 3], so it is easy to see that the value
returned is the mean.
For more complex examples (grouping for example), avoid using data without
interpretation, like a matrix of random numbers with columns A, B, C, D…
And instead use a meaningful example, which makes it easier to understand the
concept. Unless required by the example, use names of animals and their numerical
properties, to keep examples consistent.
When calling the method, keyword arguments head(n=3) are preferred to
positional arguments head(3).
Good:
class Series:
def mean(self):
"""
Compute the mean of the input.
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s.mean()
2
"""
pass
def fillna(self, value):
"""
Replace missing values by ``value``.
Examples
--------
>>> s = pd.Series([1, np.nan, 3])
>>> s.fillna(0)
[1, 0, 3]
"""
pass
def groupby_mean(self):
"""
Group by index and return mean.
Examples
--------
>>> s = pd.Series([380., 370., 24., 26],
... name='max_speed',
... index=['falcon', 'falcon', 'parrot', 'parrot'])
>>> s.groupby_mean()
index
falcon 375.0
parrot 25.0
Name: max_speed, dtype: float64
"""
pass
def contains(self, pattern, case_sensitive=True, na=numpy.nan):
"""
Return whether each value contains ``pattern``.
In this case, we are illustrating how to use sections, even
if the example is simple enough and does not require them.
Examples
--------
>>> s = pd.Series(['Antelope', 'Lion', 'Zebra', np.nan])
>>> s.contains(pattern='a')
0 False
1 False
2 True
3 NaN
dtype: bool
**Case sensitivity**
With ``case_sensitive`` set to ``False`` we can match ``a`` with both
``a`` and ``A``:
>>> s.contains(pattern='a', case_sensitive=False)
0 True
1 False
2 True
3 NaN
dtype: bool
**Missing values**
We can fill missing values in the output using the ``na`` parameter:
>>> s.contains(pattern='a', na=False)
0 False
1 False
2 True
3 False
dtype: bool
"""
pass
Bad:
def method(foo=None, bar=None):
"""
A sample DataFrame method.
Do not import NumPy and pandas.
Try to use meaningful data, when it makes the example easier
to understand.
Try to avoid positional arguments like in ``df.method(1)``. They
can be all right if previously defined with a meaningful name,
like in ``present_value(interest_rate)``, but avoid them otherwise.
When presenting the behavior with different parameters, do not place
all the calls one next to the other. Instead, add a short sentence
explaining what the example shows.
Examples
--------
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.randn(3, 3),
... columns=('a', 'b', 'c'))
>>> df.method(1)
21
>>> df.method(bar=14)
123
"""
pass
Tips for getting your examples to pass the doctests#
Getting the examples to pass the doctests in the validation script can sometimes
be tricky. Here are some attention points:
Import all needed libraries (except for pandas and NumPy, those are already
imported as import pandas as pd and import numpy as np) and define
all variables you use in the example.
Try to avoid using random data. However random data might be OK in some
cases, like if the function you are documenting deals with probability
distributions, or if the amount of data needed to make the function result
meaningful is too much, such that creating it manually is very cumbersome.
In those cases, always use a fixed random seed to make the generated examples
predictable. Example:
>>> np.random.seed(42)
>>> df = pd.DataFrame({'normal': np.random.normal(100, 5, 20)})
If you have a code snippet that wraps multiple lines, you need to use ‘…’
on the continued lines:
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], index=['a', 'b', 'c'],
... columns=['A', 'B'])
If you want to show a case where an exception is raised, you can do:
>>> pd.to_datetime(["712-01-01"])
Traceback (most recent call last):
OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 712-01-01 00:00:00
It is essential to include the “Traceback (most recent call last):”, but for
the actual error only the error name is sufficient.
If there is a small part of the result that can vary (e.g. a hash in an object
representation), you can use ... to represent this part.
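For instance, a minimal sketch (the hexadecimal address below is the part that varies between runs, so it is replaced by ...):
>>> object()
<object object at 0x...>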
If you want to show that s.plot() returns a matplotlib AxesSubplot object,
this will fail the doctest
>>> s.plot()
<matplotlib.axes._subplots.AxesSubplot at 0x7efd0c0b0690>
However, you can do (notice the comment that needs to be added)
>>> s.plot()
<matplotlib.axes._subplots.AxesSubplot at ...>
Plots in examples#
There are some methods in pandas returning plots. To render the plots generated
by the examples in the documentation, the .. plot:: directive exists.
To use it, place the next code after the “Examples” header as shown below. The
plot will be generated automatically when building the documentation.
class Series:
def plot(self):
"""
Generate a plot with the ``Series`` data.
Examples
--------
.. plot::
:context: close-figs
>>> s = pd.Series([1, 2, 3])
>>> s.plot()
"""
pass
Sharing docstrings#
pandas has a system for sharing docstrings, with slight variations, between
classes. This helps us keep docstrings consistent, while keeping things clear
for the user reading. It comes at the cost of some complexity when writing.
Each shared docstring will have a base template with variables, like
{klass}. The variables are filled in later on using the doc decorator.
Finally, docstrings can also be appended to with the doc decorator.
In this example, we’ll create a parent docstring normally (this is like
pandas.core.generic.NDFrame). Then we’ll have two children (like
pandas.core.series.Series and pandas.core.frame.DataFrame). We’ll
substitute the class names in this docstring.
class Parent:
@doc(klass="Parent")
def my_function(self):
"""Apply my function to {klass}."""
...
class ChildA(Parent):
@doc(Parent.my_function, klass="ChildA")
def my_function(self):
...
class ChildB(Parent):
@doc(Parent.my_function, klass="ChildB")
def my_function(self):
...
The resulting docstrings are
>>> print(Parent.my_function.__doc__)
Apply my function to Parent.
>>> print(ChildA.my_function.__doc__)
Apply my function to ChildA.
>>> print(ChildB.my_function.__doc__)
Apply my function to ChildB.
Notice:
We “append” the parent docstring to the children docstrings, which are
initially empty.
Our files will often contain a module-level _shared_doc_kwargs with some
common substitution values (things like klass, axes, etc).
You can substitute and append in one shot with something like
@doc(template, **_shared_doc_kwargs)
def my_function(self):
...
where template may come from a module-level _shared_docs dictionary
mapping function names to docstrings. Wherever possible, we prefer using
doc, since the docstring-writing process is slightly closer to normal.
See pandas.core.generic.NDFrame.fillna for an example template, and
pandas.core.series.Series.fillna and pandas.core.generic.frame.fillna
for the filled versions.
| 853
| 1,200
|
Rules in String into in Pandas
I would like to use Pandas to read simple rules stored in a string.
I wrote this and it works:
import numpy as np
import pandas as pd
d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)
a = "(df['col1'] > 1) & (df['col2'] > 3)"
df[eval(a)]
But I would like to remove the df[] and the '()' from my string value.
I don't know if that's possible, or whether another way exists to write this.
Thank you
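One possible direction (a sketch, not an answer taken from this thread): DataFrame.query accepts a plain expression string, so the rule can be written without df[...] or the extra parentheses.
import pandas as pd

d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)

rule = "col1 > 1 and col2 > 3"   # plain column names, no df[...] or parentheses
print(df.query(rule))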
|
59,723,055
|
How can I print the phrase as the randomly generated numbers?
|
Code:
import string, random
import pandas as pd
#User Input
title = input("Please Enter A Title For This Puzzle: ")
if len(title) == 0:
print("String is empty")
quit()
phrase = input("Please Enter A Phrase To Be Encoded: ")
if len(phrase) == 0:
print("String is empty")
quit()
#Numbers get assigned to the letters
nums = random.sample(range(1, 27), 26)
code = dict(zip(nums, string.ascii_lowercase))
#'Start' of Puzzle for the user
print ("The Title Of This Puzzle Is", title)
#Code for Grid
code2 = {'Number': [[nums[0]],[nums[1]],[nums[2]],[nums[3]],[nums[4]],[nums[5]],[nums[6]],[nums[7]],[nums[8]],[nums[9]],[nums[10]],[nums[11]],[nums[12]],[nums[13]],[nums[14]],[nums[15]],[nums[16]],[nums[17]],[nums[18]],[nums[19]],[nums[20]],[nums[21]],[nums[22]],[nums[23]],[nums[24]],[nums[25]]],
'Letter': list(string.ascii_lowercase),
}
df = pd.DataFrame(code2, columns = ['Number', 'Letter'])
print (df)
What I am currently trying to do is take the phrase that has been entered and spit out the randomly generated numbers assigned under the #Numbers get assigned to the letters comment instead.
E.g.: Hello = 20-1-16-16-19
I have already tried, but the code I'm writing is nonsense, and I would like some outside help.
Thank you
| 59,723,272
| 2020-01-13T19:34:56.027000
| 2
| null | 0
| 52
|
python|pandas
|
If I understand you correctly, is it something like this you were looking for?
Flip the positions in the zip so the mapping goes from letter to number:
code = dict(zip(string.ascii_lowercase, nums))
code.update({" ":0})
HELLO = [code[item] for item in "hello"]
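A possible follow-up (not part of the original answer): join the looked-up numbers with hyphens to get output in the requested Hello = 20-1-16-16-19 style.
import random
import string

nums = random.sample(range(1, 27), 26)
code = dict(zip(string.ascii_lowercase, nums))
code.update({" ": 0})

phrase = "hello"
# Look up each character that has a mapping and join the numbers with hyphens.
encoded = "-".join(str(code[ch]) for ch in phrase.lower() if ch in code)
print(encoded)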
| 2020-01-13T19:54:21.560000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sample.html
|
pandas.DataFrame.sample#
DataFrame.sample(n=None, frac=None, replace=False, weights=None, random_state=None, axis=None, ignore_index=False)[source]#
Return a random sample of items from an axis of object.
You can use random_state for reproducibility.
Parameters
n : int, optional
Number of items from axis to return. Cannot be used with frac.
Default = 1 if frac = None.
frac : float, optional
Fraction of axis items to return. Cannot be used with n.
replace : bool, default False
Allow or disallow sampling of the same row more than once.
weights : str or ndarray-like, optional
Default ‘None’ results in equal probability weighting.
If passed a Series, will align with target object on index. Index
values in weights not found in sampled object will be ignored and
index values in sampled object not in weights will be assigned
weights of zero.
If called on a DataFrame, will accept the name of a column
when axis = 0.
Unless weights are a Series, weights must be same length as axis
being sampled.
If weights do not sum to 1, they will be normalized to sum to 1.
Missing values in the weights column will be treated as zero.
Infinite values not allowed.
random_state : int, array-like, BitGenerator, np.random.RandomState, np.random.Generator, optional
If int, array-like, or BitGenerator, seed for random number generator.
If np.random.RandomState or np.random.Generator, use as given.
Changed in version 1.1.0: array-like and BitGenerator object now passed to np.random.RandomState()
as seed
Changed in version 1.4.0: np.random.Generator objects now accepted
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
Axis to sample. Accepts axis number or name. Default is stat axis
for given data type. For Series this parameter is unused and defaults to None.
ignore_index : bool, default False
If True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.3.0.
Returns
Series or DataFrame
A new object of same type as caller containing n items randomly
sampled from the caller object.
See also
DataFrameGroupBy.sample : Generates random samples from each group of a DataFrame object.
SeriesGroupBy.sample : Generates random samples from each group of a Series object.
numpy.random.choice : Generates a random sample from a given 1-D numpy array.
Notes
If frac > 1, replacement should be set to True.
Examples
>>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
... 'num_wings': [2, 0, 0, 0],
... 'num_specimen_seen': [10, 2, 1, 8]},
... index=['falcon', 'dog', 'spider', 'fish'])
>>> df
num_legs num_wings num_specimen_seen
falcon 2 2 10
dog 4 0 2
spider 8 0 1
fish 0 0 8
Extract 3 random elements from the Series df['num_legs']:
Note that we use random_state to ensure the reproducibility of
the examples.
>>> df['num_legs'].sample(n=3, random_state=1)
fish 0
spider 8
falcon 2
Name: num_legs, dtype: int64
A random 50% sample of the DataFrame with replacement:
>>> df.sample(frac=0.5, replace=True, random_state=1)
num_legs num_wings num_specimen_seen
dog 4 0 2
fish 0 0 8
An upsample sample of the DataFrame with replacement:
Note that replace parameter has to be True for frac parameter > 1.
>>> df.sample(frac=2, replace=True, random_state=1)
num_legs num_wings num_specimen_seen
dog 4 0 2
fish 0 0 8
falcon 2 2 10
falcon 2 2 10
fish 0 0 8
dog 4 0 2
fish 0 0 8
dog 4 0 2
Using a DataFrame column as weights. Rows with larger value in the
num_specimen_seen column are more likely to be sampled.
>>> df.sample(n=2, weights='num_specimen_seen', random_state=1)
num_legs num_wings num_specimen_seen
falcon 2 2 10
fish 0 0 8
| 924
| 1,135
|
How can I print the phrase as the randomly generated numbers?
Code:
import string, random
import pandas as pd
#User Input
title = input("Please Enter A Title For This Puzzle: ")
if len(title) == 0:
print("String is empty")
quit()
phrase = input("Please Enter A Phrase To Be Encoded: ")
if len(phrase) == 0:
print("String is empty")
quit()
#Numbers get assigned to the letters
nums = random.sample(range(1, 27), 26)
code = dict(zip(nums, string.ascii_lowercase))
#'Start' of Puzzle for the user
print ("The Title Of This Puzzle Is", title)
#Code for Grid
code2 = {'Number': [[nums[0]],[nums[1]],[nums[2]],[nums[3]],[nums[4]],[nums[5]],[nums[6]],[nums[7]],[nums[8]],[nums[9]],[nums[10]],[nums[11]],[nums[12]],[nums[13]],[nums[14]],[nums[15]],[nums[16]],[nums[17]],[nums[18]],[nums[19]],[nums[20]],[nums[21]],[nums[22]],[nums[23]],[nums[24]],[nums[25]]],
'Letter': list(string.ascii_lowercase),
}
df = pd.DataFrame(code2, columns = ['Number', 'Letter'])
print (df)
What I am currently trying to do is take the phrase that has been entered and spit out the randomly generated numbers, under the #Numbers Get Assigned To The Letter comment, instead.
E.g: Hello = 20-1-16-16-19
I have already tried, but the code I'm writing is nonsense, and I would like some outside help.
Thank you
|
62,179,877
|
Matching column values from two different dataframes and pairing observations
|
So I have no idea how to do this, and after looking around for half a day I have not found a solution.
My data looks something like this:
import pandas as pd
df1 = pd.DataFrame(
[['132','233','472','098'], ['482','214','980',''], ['107','','',''],
['571','498','',''],], columns=["p1", "p2", "p3", "p4"])
df2 = pd.DataFrame(['532','233','980','132', '298'], columns=["p"])
df1
p1 p2 p3
0 132 233 472
1 482 214 980
2 107
3 571 498
df2
p
0 532
1 233
2 980
3 132
4 298
I wish to match the values in the p column with any one of the values in the p{1-3} columns, and create a new column which contains the matched string.
So in this instance my desired output is:
df_output
p1 p2 p3 matched_p
0 132 233 472 233
1 482 214 980 980
2 107
3 571 498
I tried the following:
filter1 = df1['p1'].isin(df2['p'])
filter2 = df1['p2'].isin(df2['p'])
filter3 = df1['p3'].isin(df2['p'])
df1['matched_p'] = df2['p'][filter1 | filter2 | filter3]
However, this gave me nonsensical results.
Any ideas on how to approach this problem?
| 62,180,339
| 2020-06-03T18:15:00.420000
| 3
| null | 2
| 53
|
python|pandas
|
You can try this, using df.isin (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isin.html) and df.where (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html) with df.max (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.max.html) over axis 1.
df1 = df1.replace('',np.nan).astype(float) # to convert everything to float.
df2 = df2.astype(float) #to convert everything to float.
m = df1.isin(df2['p'].to_numpy())
df1['matched_values'] = df1.where(m,0).max(1)
df1
p1 p2 p3 p4 matched_values
0 132.0 233.0 472.0 98.0 233.0
1 482.0 214.0 980.0 NaN 980.0
2 107.0 NaN NaN NaN NaN
3 571.0 498.0 NaN NaN NaN
If you don't want to convert your dtypes to float:
Inspired by @Erfan's solution (https://stackoverflow.com/a/62180368/12416453); I combined our approaches.
df1['matched'] = (df1.where(
df1.isin(df2['p'].to_numpy()),'').
add(',').sum(1).str.strip(','))
| 2020-06-03T18:40:40.923000
| 2
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
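For comparison, a rough pandas sketch of the same query (the table and column names here are illustrative):
import pandas as pd

some_table = pd.DataFrame(
    {
        "Column1": ["a", "a", "b"],
        "Column2": ["x", "x", "y"],
        "Column3": [1.0, 2.0, 3.0],
        "Column4": [10, 20, 30],
    }
)
# GROUP BY Column1, Column2 with mean(Column3) and sum(Column4)
some_table.groupby(["Column1", "Column2"]).agg(
    Column3=("Column3", "mean"), Column4=("Column4", "sum")
)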
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
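As a small illustration (not from the original text), a label-mapping function can group a datetime index by calendar quarter:
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=6, freq="M")
s = pd.Series(np.arange(6), index=idx)
# The function receives each index label (a Timestamp) and returns its quarter.
s.groupby(lambda ts: ts.quarter).sum()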
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function        Description
mean()          Compute mean of groups
sum()           Compute sum of group values
size()          Compute group sizes
count()         Compute count of group
std()           Standard deviation of groups
var()           Compute variance of groups
sem()           Standard error of the mean of groups
describe()      Generates descriptive statistics
first()         Compute first of group values
last()          Compute last of group values
nth()           Take nth value, or a subset if n is a list
min()           Compute min of group values
max()           Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
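For example, a minimal sketch (the column and output names here are illustrative):
import functools
import numpy as np
import pandas as pd

animals = pd.DataFrame(
    {"kind": ["cat", "dog", "cat", "dog"], "weight": [7.9, 7.5, 9.9, 198.0]}
)
# Fix the extra ``q`` argument up front, then use the partial in named aggregation.
q75 = functools.partial(np.quantile, q=0.75)
animals.groupby("kind").agg(weight_q75=("weight", q75))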
Note
For Python 3.5 and earlier, the order of **kwargs in a functions was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
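A minimal sketch, assuming the optional Numba dependency is installed (the function and column names are illustrative):
import pandas as pd

# With engine="numba", each group's data arrives as a NumPy array in ``values``
# and the group's index as a NumPy array in ``index``.
def group_mean(values, index):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

df = pd.DataFrame({"key": ["a", "a", "b"], "value": [1.0, 2.0, 3.0]})
df.groupby("key")["value"].agg(group_mean, engine="numba")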
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
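For example, with the df above, the two forms compare as follows (a short sketch):
df.groupby("A")["C"].std()                    # aggregates only C
df.groupby("A").std(numeric_only=True)["C"]   # computes D as well, then discards it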
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a "nuisance" column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
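For example, explicitly selecting the columns to operate on (a sketch using the df above) gives the same result as numeric_only=True:
df.groupby("A")[["C", "D"]].std()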
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped result will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
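A minimal sketch of this behaviour (not from the docs; df_na is a made-up frame):
df_na = pd.DataFrame({"key": ["a", np.nan, "a"], "val": [1, 2, 3]})
df_na.groupby("key").sum()   # only group "a" (val 4) appears; the NaN-keyed row is dropped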
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
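A small sketch (an assumption, not taken from the docs): each entry of that result can be indexed by group value to reach the underlying matplotlib axes.
axes_by_group = df.groupby("g").boxplot()
axes_by_group["A"].set_title("group A only")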
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array (here containing only 0 and 1) which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 64
| 823
|
Matching column values from two different dataframes and pairing observations
So I have no idea how to do this, and after looking around for half a day I have not found a solution.
My data looks something like this
import pandas as pd
df1 = pd.DataFrame(
[['132','233','472','098'], ['482','214','980',''], ['107','','',''],
['571','498','',''],], columns=["p1", "p2", "p3", "p4"])
df2 = pd.DataFrame(['532','233','980','132', '298'], columns=["p"])
df1
p1 p2 p3
0 132 233 472
1 482 214 980
2 107
3 571 498
df2
p
0 532
1 233
2 980
3 132
4 298
I wish to match the values in the p column with any one of the values in the p{1-3} columns, and create a new column which contains the matched string.
So in this instance my desired output is
df_output
p1 p2 p3 matched_p
0 132 233 472 233
1 482 214 980 980
2 107
3 571 498
I tried the following
filter1 = df1['p1'].isin(df2['p'])
filter2 = df1['p2'].isin(df2['p'])
filter3 = df1['p3'].isin(df2['p'])
df1['matched_p'] = df2['p'][filter1 | filter2 | filter3]
however, this gave me nonsensical results.
Any ideas on how to approach this problem?
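One possible sketch (not from the original thread; it assumes all cells are strings): keep only the cells whose value appears in df2['p'] and take the first such value per row, falling back to an empty string.
mask = df1.isin(df2['p'].tolist())
df1['matched_p'] = df1.where(mask).apply(
    lambda row: row.dropna().iloc[0] if row.notna().any() else '', axis=1
)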
|
69,306,507
|
Creating multiple additional calculated columns to a dataframe in pandas using a for statement
|
<p>I am trying to create an additional column to my dataframe using a loop, where the additional columns would be a multiple of a current column, but that multiple will change. I understand that this can be solved relatively easily by just creating a new column for each, but this is part of a larger project for which I can't do that.</p>
<p>Starting with this dataframe:</p>
<pre><code> Time Amount
0 20 10
1 10 5
2 15 25
</code></pre>
<p>Hoping for the following outcome:</p>
<pre><code> Time Amount Amount i=2 Amount i=3 Amount i=4
0 20 10 20 30 40
1 10 5 10 15 20
2 15 25 50 75 100
</code></pre>
<p>I think there should be an easy answer, but I can't find anything online. So far I have this:</p>
<pre><code>data = {'Time': [20,10,15],
'Amount': [10,5,25]}
df = pd.DataFrame(data)
for i in range(2,5):
df = df.append(df['Amount']*i)
print(df)
</code></pre>
<p>Thanks</p>
| 69,306,622
| 2021-09-23T20:24:41.697000
| 3
| 0
| 0
| 53
|
python|pandas
|
<p>Do you want something like this?</p>
<pre><code>for i in range(2,5):
    df["Amount i={}".format(i)] = df['Amount']*i
</code></pre>
<p>Output:</p>
<pre><code> Time Amount Amount i=2 Amount i=3 Amount i=4
0 20 10 20 30 40
1 10 5 10 15 20
2 15 25 50 75 100
</code></pre>
| 2021-09-23T20:35:09.623000
| 2
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/05_add_columns.html
|
How to create new columns derived from existing columns?#
In [1]: import pandas as pd
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) is used, made
available by OpenAQ and using the
py-openaq package.
The air_quality_no2.csv data set provides \(NO_2\) values for
the measurement stations FR04014, BETR801 and London Westminster
in respectively Paris, Antwerp and London.
To raw data
In [2]: air_quality = pd.read_csv("data/air_quality_no2.csv", index_col=0, parse_dates=True)
In [3]: air_quality.head()
Out[3]:
station_antwerp station_paris station_london
datetime
Do you want something like this?
for i in range(2,5):
    df["Amount i={}".format(i)] = df['Amount']*i
Output:
 Time Amount Amount i=2 Amount i=3 Amount i=4
0 20 10 20 30 40
1 10 5 10 15 20
2 15 25 50 75 100
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN
How to create new columns derived from existing columns?#
I want to express the \(NO_2\) concentration of the station in London in mg/m\(^3\).
(If we assume temperature of 25 degrees Celsius and pressure of 1013
hPa, the conversion factor is 1.882)
In [4]: air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882
In [5]: air_quality.head()
Out[5]:
station_antwerp ... london_mg_per_cubic
datetime ...
2019-05-07 02:00:00 NaN ... 43.286
2019-05-07 03:00:00 50.5 ... 35.758
2019-05-07 04:00:00 45.0 ... 35.758
2019-05-07 05:00:00 NaN ... 30.112
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 4 columns]
To create a new column, use the [] brackets with the new column name
at the left side of the assignment.
Note
The calculation of the values is done element-wise. This
means all values in the given column are multiplied by the value 1.882
at once. You do not need to use a loop to iterate each of the rows!
I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column.
In [6]: air_quality["ratio_paris_antwerp"] = (
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...: )
...:
In [7]: air_quality.head()
Out[7]:
station_antwerp ... ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN ... NaN
2019-05-07 03:00:00 50.5 ... 0.495050
2019-05-07 04:00:00 45.0 ... 0.615556
2019-05-07 05:00:00 NaN ... NaN
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 5 columns]
The calculation is again element-wise, so the / is applied for the
values in each row.
Also other mathematical operators (+, -, *, /,…) or
logical operators (<, >, ==,…) work element-wise. The latter was already
used in the subset data tutorial to filter
rows of a table using a conditional expression.
If you need more advanced logic, you can use arbitrary Python code via apply().
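For example, a small sketch (the column name quality_flag is invented here for illustration):
air_quality["quality_flag"] = air_quality["station_london"].apply(
    lambda value: "high" if value > 20 else "low"
)
# note: in this simple sketch, missing values also end up labelled "low"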
I want to rename the data columns to the corresponding station identifiers used by OpenAQ.
In [8]: air_quality_renamed = air_quality.rename(
...: columns={
...: "station_antwerp": "BETR801",
...: "station_paris": "FR04014",
...: "station_london": "London Westminster",
...: }
...: )
...:
In [9]: air_quality_renamed.head()
Out[9]:
BETR801 FR04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
The rename() function can be used for both row labels and column
labels. Provide a dictionary with the keys the current names and the
values the new names to update the corresponding names.
The mapping should not be restricted to fixed names only, but can be a
mapping function as well. For example, converting the column names to
lowercase letters can be done using a function as well:
In [10]: air_quality_renamed = air_quality_renamed.rename(columns=str.lower)
In [11]: air_quality_renamed.head()
Out[11]:
betr801 fr04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
To user guideDetails about column or row label renaming is provided in the user guide section on renaming labels.
REMEMBER
Create a new column by assigning the output to the DataFrame with a
new column name in between the [].
Operations are element-wise, no need to loop over rows.
Use rename with a dictionary or function to rename row labels or
column names.
To user guideThe user guide contains a separate section on column addition and deletion.
| 713
| 1,024
|
Creating multiple additional calculated columns to a dataframe in pandas using a for statement
I am trying to create an additional column to my dataframe using a loop, where the additional columns would be a multiple of a current column, but that multiple will change. I understand that this can be solved relatively easily by just creating a new column for each, but this is part of a larger project for which I can't do that.
Starting with this dataframe:
Time Amount
0 20 10
1 10 5
2 15 25
Hoping for the following outcome:
Time Amount Amount i=2 Amount i=3 Amount i=4
0 20 10 20 30 40
1 10 5 10 15 20
2 15 25 50 75 100
I think there should be an easy answer, but I can't find anything online. So far I have this:
data = {'Time': [20,10,15],
'Amount': [10,5,25]}
df = pd.DataFrame(data)
for i in range(2,5):
df = df.append(df['Amount']*i)
print(df)
Thanks
|
65,608,095
|
how to count the number of rows following a particular values in the same column in a dataframe
|
<p>Consider I have the following dataframe:</p>
<pre><code>tempDic = {0: {0: 'class([1,0,0,0],"Small-molecule metabolism ").', 1: 'function(tb186,[1,1,1,0],\'bglS\',"beta-glucosidase").', 2: 'function(tb2202,[1,1,1,0],\'cbhK\',"carbohydrate kinase").', 3: 'function(tb727,[1,1,1,0],\'fucA\',"L-fuculose phosphate aldolase").', 4: 'function(tb1731,[1,1,1,0],\'gabD1\',"succinate-semialdehyde dehydrogenase").', 5: 'function(tb234,[1,1,1,0],\'gabD2\',"succinate-semialdehyde dehydrogenase").', 6: 'class([1,1,0,0],"Degradation ").', 7: 'function(tb501,[1,1,1,0],\'galE1\',"UDP-glucose 4-epimerase").', 8: 'function(tb536,[1,1,1,0],\'galE2\',"UDP-glucose 4-epimerase").', 9: 'function(tb620,[1,1,1,0],\'galK\',"galactokinase").', 10: 'function(tb619,[1,1,1,0],\'galT\',"galactose-1-phosphate uridylyltransferase C-term").', 11: 'class([1,1,1,0],"Carbon compounds ").', 12: 'function(tb186,[1,1,1,0],\'bglS\',"beta-glucosidase").', 13: 'function(tb2202,[1,1,1,0],\'cbhK\',"carbohydrate kinase").', 14: 'function(tb727,[1,1,1,0],\'fucA\',"L-fuculose phosphate aldolase").', 15: 'function(tb1731,[1,1,1,0],\'gabD1\',"succinate-semialdehyde dehydrogenase").', 16: 'function(tb234,[1,1,1,0],\'gabD2\',"succinate-semialdehyde dehydrogenase").', 17: 'function(tb501,[1,1,1,0],\'galE1\',"UDP-glucose 4-epimerase").', 18: 'class([1,1,1,0],"xyz ").'}}
df = pd.DataFrame(tempDic)
print(df)
0
0 class([1,0,0,0],"Small-molecule metabolism ").
1 function(tb186,[1,1,1,0],'bglS',"beta-glucosid...
2 function(tb2202,[1,1,1,0],'cbhK',"carbohydrate...
3 function(tb727,[1,1,1,0],'fucA',"L-fuculose ph...
4 function(tb1731,[1,1,1,0],'gabD1',"succinate-s...
5 function(tb234,[1,1,1,0],'gabD2',"succinate-se...
6 class([1,1,0,0],"Degradation ").
7 function(tb501,[1,1,1,0],'galE1',"UDP-glucose ...
8 function(tb536,[1,1,1,0],'galE2',"UDP-glucose ...
9 function(tb620,[1,1,1,0],'galK',"galactokinase").
10 function(tb619,[1,1,1,0],'galT',"galactose-1-p...
11 class([1,1,1,0],"Carbon compounds ").
12 function(tb186,[1,1,1,0],'bglS',"beta-glucosid...
13 function(tb2202,[1,1,1,0],'cbhK',"carbohydrate...
14 function(tb727,[1,1,1,0],'fucA',"L-fuculose ph...
15 function(tb1731,[1,1,1,0],'gabD1',"succinate-s...
16 function(tb234,[1,1,1,0],'gabD2',"succinate-se...
17 function(tb501,[1,1,1,0],'galE1',"UDP-glucose ...
18 class([1,1,1,0],"xyz ").
</code></pre>
<p>What I need is a strategy that will give me a result like this:</p>
<pre><code>Class Count
Small-molecule metabolism 5
Degradation 4
Carbon compounds 6
xyz 0
</code></pre>
<p>Each row that starts with "class" contains the name of the class in double quotes, for example, "Small-molecule metabolism" in the first row. This row is then followed by rows starting with "function". We just need to count those rows that start with "function" and put that count in front of that class name.
A class that is not followed by "function" rows should be assigned the value of 0, meaning that the class has zero functions.</p>
| 65,608,289
| 2021-01-07T07:18:34.723000
| 3
| null | 2
| 54
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.startswith.html" rel="nofollow noreferrer"><code>Series.str.startswith</code></a> for mask, get values between <code>""</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a> and after forward filling missing values use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a> with subtract <code>1</code>:</p>
<pre><code>df['Class'] = df.loc[df[0].str.startswith('class'), 0].str.extract('"(.+)"', expand=False)
df['Class'] = df['Class'].ffill()
s = df.groupby('Class', sort=False).size().sub(1).reset_index(name='Count')
print (s)
Class Count
0 Small-molecule metabolism 5
1 Degradation 4
2 Carbon compounds 6
3 xyz 0
</code></pre>
<p>Details of steps:</p>
<pre><code>print(df.loc[df[0].str.startswith('class'), 0])
0 class([1,0,0,0],"Small-molecule metabolism ").
6 class([1,1,0,0],"Degradation ").
11 class([1,1,1,0],"Carbon compounds ").
18 class([1,1,1,0],"xyz ").
Name: 0, dtype: object
print (df.loc[df[0].str.startswith('class'), 0].str.extract('"(.+)"', expand=False))
0 Small-molecule metabolism
6 Degradation
11 Carbon compounds
18 xyz
Name: 0, dtype: object
</code></pre>
<hr />
<pre><code>df['Class'] = df.loc[df[0].str.startswith('class'), 0].str.extract('"(.+)"', expand=False)
print (df['Class'])
0 Small-molecule metabolism
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 Degradation
7 NaN
8 NaN
9 NaN
10 NaN
11 Carbon compounds
12 NaN
13 NaN
14 NaN
15 NaN
16 NaN
17 NaN
18 xyz
Name: Class, dtype: object
</code></pre>
<hr />
<pre><code>df['Class'] = df['Class'].ffill()
print (df['Class'])
0 Small-molecule metabolism
1 Small-molecule metabolism
2 Small-molecule metabolism
3 Small-molecule metabolism
4 Small-molecule metabolism
5 Small-molecule metabolism
6 Degradation
7 Degradation
8 Degradation
9 Degradation
10 Degradation
11 Carbon compounds
12 Carbon compounds
13 Carbon compounds
14 Carbon compounds
15 Carbon compounds
16 Carbon compounds
17 Carbon compounds
18 xyz
Name: Class, dtype: object
</code></pre>
<hr />
<pre><code>print (df.groupby('Class', sort=False).size())
Class
Small-molecule metabolism 6
Degradation 5
Carbon compounds 7
xyz 1
dtype: int64
df1 = df.groupby('Class', sort=False).size().sub(1).reset_index(name='Count')
print (df1)
Class Count
0 Small-molecule metabolism 5
1 Degradation 4
2 Carbon compounds 6
3 xyz 0
</code></pre>
| 2021-01-07T07:36:07.433000
| 2
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/06_calculate_statistics.html
|
Use Series.str.startswith for mask, get values between "" by Series.str.extract and after forward filling missing values use GroupBy.size with subtract 1:
df['Class'] = df.loc[df[0].str.startswith('class'), 0].str.extract('"(.+)"', expand=False)
df['Class'] = df['Class'].ffill()
s = df.groupby('Class', sort=False).size().sub(1).reset_index(name='Count')
print (s)
Class Count
0 Small-molecule metabolism 5
1 Degradation 4
2 Carbon compounds 6
3 xyz 0
Details of steps:
print(df.loc[df[0].str.startswith('class'), 0])
0 class([1,0,0,0],"Small-molecule metabolism ").
6 class([1,1,0,0],"Degradation ").
11 class([1,1,1,0],"Carbon compounds ").
18 class([1,1,1,0],"xyz ").
Name: 0, dtype: object
print (df.loc[df[0].str.startswith('class'), 0].str.extract('"(.+)"', expand=False))
0 Small-molecule metabolism
6 Degradation
11 Carbon compounds
18 xyz
Name: 0, dtype: object
df['Class'] = df.loc[df[0].str.startswith('class'), 0].str.extract('"(.+)"', expand=False)
print (df['Class'])
0 Small-molecule metabolism
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 Degradation
7 NaN
8 NaN
9 NaN
10 NaN
11 Carbon compounds
12 NaN
13 NaN
14 NaN
15 NaN
16 NaN
17 NaN
18 xyz
Name: Class, dtype: object
df['Class'] = df['Class'].ffill()
print (df['Class'])
0 Small-molecule metabolism
1 Small-molecule metabolism
2 Small-molecule metabolism
3 Small-molecule metabolism
4 Small-molecule metabolism
5 Small-molecule metabolism
6 Degradation
7 Degradation
8 Degradation
9 Degradation
10 Degradation
11 Carbon compounds
12 Carbon compounds
13 Carbon compounds
14 Carbon compounds
15 Carbon compounds
16 Carbon compounds
17 Carbon compounds
18 xyz
Name: Class, dtype: object
print (df.groupby('Class', sort=False).size())
Class
Small-molecule metabolism 6
Degradation 5
Carbon compounds 7
xyz 1
dtype: int64
df1 = df.groupby('Class', sort=False).size().sub(1).reset_index(name='Count')
print (df1)
Class Count
0 Small-molecule metabolism 5
1 Degradation 4
2 Carbon compounds 6
3 xyz 0
| 0
| 3,048
|
how to count the number of rows following a particular values in the same column in a dataframe
Consider I have the following dataframe:
tempDic = {0: {0: 'class([1,0,0,0],"Small-molecule metabolism ").', 1: 'function(tb186,[1,1,1,0],\'bglS\',"beta-glucosidase").', 2: 'function(tb2202,[1,1,1,0],\'cbhK\',"carbohydrate kinase").', 3: 'function(tb727,[1,1,1,0],\'fucA\',"L-fuculose phosphate aldolase").', 4: 'function(tb1731,[1,1,1,0],\'gabD1\',"succinate-semialdehyde dehydrogenase").', 5: 'function(tb234,[1,1,1,0],\'gabD2\',"succinate-semialdehyde dehydrogenase").', 6: 'class([1,1,0,0],"Degradation ").', 7: 'function(tb501,[1,1,1,0],\'galE1\',"UDP-glucose 4-epimerase").', 8: 'function(tb536,[1,1,1,0],\'galE2\',"UDP-glucose 4-epimerase").', 9: 'function(tb620,[1,1,1,0],\'galK\',"galactokinase").', 10: 'function(tb619,[1,1,1,0],\'galT\',"galactose-1-phosphate uridylyltransferase C-term").', 11: 'class([1,1,1,0],"Carbon compounds ").', 12: 'function(tb186,[1,1,1,0],\'bglS\',"beta-glucosidase").', 13: 'function(tb2202,[1,1,1,0],\'cbhK\',"carbohydrate kinase").', 14: 'function(tb727,[1,1,1,0],\'fucA\',"L-fuculose phosphate aldolase").', 15: 'function(tb1731,[1,1,1,0],\'gabD1\',"succinate-semialdehyde dehydrogenase").', 16: 'function(tb234,[1,1,1,0],\'gabD2\',"succinate-semialdehyde dehydrogenase").', 17: 'function(tb501,[1,1,1,0],\'galE1\',"UDP-glucose 4-epimerase").', 18: 'class([1,1,1,0],"xyz ").'}}
df = pd.DataFrame(tempDic)
print(df)
0
0 class([1,0,0,0],"Small-molecule metabolism ").
1 function(tb186,[1,1,1,0],'bglS',"beta-glucosid...
2 function(tb2202,[1,1,1,0],'cbhK',"carbohydrate...
3 function(tb727,[1,1,1,0],'fucA',"L-fuculose ph...
4 function(tb1731,[1,1,1,0],'gabD1',"succinate-s...
5 function(tb234,[1,1,1,0],'gabD2',"succinate-se...
6 class([1,1,0,0],"Degradation ").
7 function(tb501,[1,1,1,0],'galE1',"UDP-glucose ...
8 function(tb536,[1,1,1,0],'galE2',"UDP-glucose ...
9 function(tb620,[1,1,1,0],'galK',"galactokinase").
10 function(tb619,[1,1,1,0],'galT',"galactose-1-p...
11 class([1,1,1,0],"Carbon compounds ").
12 function(tb186,[1,1,1,0],'bglS',"beta-glucosid...
13 function(tb2202,[1,1,1,0],'cbhK',"carbohydrate...
14 function(tb727,[1,1,1,0],'fucA',"L-fuculose ph...
15 function(tb1731,[1,1,1,0],'gabD1',"succinate-s...
16 function(tb234,[1,1,1,0],'gabD2',"succinate-se...
17 function(tb501,[1,1,1,0],'galE1',"UDP-glucose ...
18 class([1,1,1,0],"xyz ").
What I need is a strategy that will give me a result like this:
Class Count
Small-molecule metabolism 5
Degradation 4
Carbon compounds 6
xyz 0
Each row that starts with "class" contains the name of the class in double quotes, for example, "Small-molecule metabolism" in the first row. This row is then followed by rows starting with "function". We just need to count those rows that start with "function" and put that count in front of that class name.
A class that is not followed by "function" rows should be assigned the value of 0, meaning that the class has zero functions.
|
61,158,353
|
Get list of of Hours Between Two Datetime Variables
|
<p>I have a dataframe that looks like this: </p>
<pre><code>Date Name Provider Task StartDateTime LastDateTime
2020-01-01 00:00:00 Bob PEM ED A 7a-4p 2020-01-01 07:00:00 2020-01-01 16:00:00
2020-01-02 00:00:00 Tom PEM ED C 10p-2a 2020-01-02 22:00:00 2020-01-03 02:00:00
</code></pre>
<p>I would like to list the number of hours between each person's <code>StartDateTime</code> and <code>LastDateTime</code> (datetime64[ns]) and then create an updated dataframe to reflect said lists. So for example, the updated dataframe would look like this:</p>
<pre><code>Name Date Hour
Bob 2020-01-01 7
Bob 2020-01-01 8
Bob 2020-01-01 9
...
Tom 2020-01-02 22
Tom 2020-01-02 23
Tom 2020-01-03 0
Tom 2020-01-03 1
...
</code></pre>
<p>I honestly do not have a solid idea where to start. I've found some articles that may provide a foundation, but I'm not sure how to adapt my query to the code below, since I want the counts based on the row and hour values.</p>
<pre><code>from datetime import date, timedelta

def daterange(date1, date2):
    for n in range(int((date2 - date1).days) + 1):
        yield date1 + timedelta(n)

start_dt = date(2015, 12, 20)
end_dt = date(2016, 1, 11)
for dt in daterange(start_dt, end_dt):
    print(dt.strftime("%Y-%m-%d"))
</code></pre>
<p><a href="https://www.w3resource.com/python-exercises/date-time-exercise/python-date-time-exercise-50.php" rel="nofollow noreferrer">https://www.w3resource.com/python-exercises/date-time-exercise/python-date-time-exercise-50.php</a></p>
| 61,158,478
| 2020-04-11T14:28:35.877000
| 1
| null | 1
| 60
|
python|pandas
|
<p>Let us create the range of datetimes, then use <code>explode</code>:</p>
<pre><code>df['Date']=[pd.date_range(x,y , freq='H') for x , y in zip(df.StartDateTime,df.LastDateTime)]
s=df[['Date','Name']].explode('Date').reset_index(drop=True)
s['Hour']=s.Date.dt.hour
s['Date']=s.Date.dt.date
s.head()
Date Name Hour
0 2020-01-01 Bob 7
1 2020-01-01 Bob 8
2 2020-01-01 Bob 9
3 2020-01-01 Bob 10
4 2020-01-01 Bob 11
</code></pre>
| 2020-04-11T14:37:11.260000
| 2
|
https://pandas.pydata.org/docs/user_guide/timeseries.html
|
Time series / date functionality#
Time series / date functionality#
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
Let us create the range of datetimes, then use explode:
df['Date']=[pd.date_range(x,y , freq='H') for x , y in zip(df.StartDateTime,df.LastDateTime)]
s=df[['Date','Name']].explode('Date').reset_index(drop=True)
s['Hour']=s.Date.dt.hour
s['Date']=s.Date.dt.date
s.head()
Date Name Hour
0 2020-01-01 Bob 7
1 2020-01-01 Bob 8
2 2020-01-01 Bob 9
3 2020-01-01 Bob 10
4 2020-01-01 Bob 11
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for
performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
Time spans: A span of time defined by a point in time and its associated frequency.
Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
Concept        Scalar Class   Array Class      pandas Data Type                        Primary Creation Method
Date times     Timestamp      DatetimeIndex    datetime64[ns] or datetime64[ns, tz]    to_datetime or date_range
Time deltas    Timedelta      TimedeltaIndex   timedelta64[ns]                         to_timedelta or timedelta_range
Time spans     Period         PeriodIndex      period[freq]                            Period or period_range
Date offsets   DateOffset     None             None                                    DateOffset
For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can directly also support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta
and Period data when passed into those constructors. DateOffset
data however will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT which
is useful for representing missing or null date like values and behaves similar
as np.nan does for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates
values with points in time. For pandas objects it means using the points in
time.
In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[28]: Timestamp('2012-05-01 00:00:00')
In [29]: pd.Timestamp("2012-05-01")
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change
variables with a time span instead. The span represented by Period can be
specified explicitly, or inferred from datetime string format.
For example:
In [31]: pd.Period("2011-01")
Out[31]: Period('2011-01', 'M')
In [32]: pd.Period("2012-05", freq="D")
Out[32]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of
Timestamp and Period are automatically coerced to DatetimeIndex
and PeriodIndex respectively.
In [33]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [34]: ts = pd.Series(np.random.randn(3), dates)
In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex
In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [38]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [39]: ts = pd.Series(np.random.randn(3), periods)
In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex
In [41]: ts.index
Out[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp and sequences of timestamps using instances of
DatetimeIndex. For regular time spans, pandas uses Period objects for
scalar values and PeriodIndex for sequences of spans. Better support for
irregular intervals with arbitrary start and end points is forthcoming in
future releases.
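A small sketch of converting between the two representations (not taken from this page):
ts = pd.Timestamp("2012-05-01 13:00")
p = ts.to_period(freq="M")    # Period('2012-05', 'M')
p.to_timestamp()              # back to Timestamp('2012-05-01 00:00:00')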
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings,
epochs, or a mixture, you can use the to_datetime function. When passed
a Series, this returns a Series (with the same index), while a list-like
is converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(["Jul 31, 2009", "2010-01-10", None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [44]: pd.to_datetime(["2005/11/23", "2010.12.31"])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst flag:
In [45]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [46]: pd.to_datetime(["14-01-2012", "01-14-2012"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst were False, and in the case of parsing delimited date strings
(e.g. 31-12-2012) then a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp.
Timestamp can also accept string input, but it doesn’t accept string parsing
options like dayfirst or format, so use to_datetime if these are required.
In [47]: pd.to_datetime("2010/11/12")
Out[47]: Timestamp('2010-11-12 00:00:00')
In [48]: pd.Timestamp("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex constructor directly:
In [49]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [51]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[51]: Timestamp('2010-11-12 00:00:00')
In [52]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[52]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [53]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [55]: pd.to_datetime(df[["year", "month", "day"]])
Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format
Pass errors='ignore' to return the original input when unparsable:
In [56]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[56]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce' to convert unparsable data to NaT (not a time):
In [57]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp and
DatetimeIndex. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin parameter.
In [58]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [59]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter
that was discussed above. The
available units are listed on the documentation for pandas.to_datetime().
Changed in version 1.0.0.
Constructing a Timestamp or DatetimeIndex with an epoch timestamp
with the tz argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [60]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use fixed-width
types (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [63]: pd.to_datetime(1490195805433502912, unit="ns")
Out[63]: Timestamp('2017-03-22 15:16:45.433502912')
See also
Using the origin parameter
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [64]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the
“unit” (1 second).
In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
Using the origin parameter#
Using the origin parameter, one can specify an alternative starting point for creation
of a DatetimeIndex. For example, to use 1960-01-01 as the starting date:
In [67]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00.
Commonly called ‘unix epoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit="D")
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex or
Index constructor and pass in a list of datetime objects:
In [69]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [70]: index = pd.DatetimeIndex(dates)
In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [72]: index = pd.Index(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a
calendar day while the default for bdate_range is a business day:
In [74]: start = datetime.datetime(2011, 1, 1)
In [75]: end = datetime.datetime(2012, 1, 1)
In [76]: index = pd.date_range(start, end)
In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [78]: index = pd.bdate_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a
variety of frequency aliases:
In [80]: pd.date_range(start, periods=1000, freq="M")
Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [81]: pd.bdate_range(start, periods=250, freq="BQS")
Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range and bdate_range make it easy to generate a range of dates
using various combinations of parameters like start, end, periods,
and freq. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [82]: pd.date_range(start, end, freq="BM")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [83]: pd.date_range(start, end, freq="W")
Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [84]: pd.bdate_range(end=end, periods=20)
Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [85]: pd.bdate_range(start=start, periods=20)
Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start, end, and periods will generate a range of evenly spaced
dates from start to end inclusively, with periods number of elements in the
resulting DatetimeIndex:
In [86]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [87]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range can also generate a range of custom frequency dates by using
the weekmask and holidays parameters. These parameters will only be
used if a custom frequency string is passed.
In [88]: weekmask = "Mon Wed Fri"
In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [90]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[90]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [91]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
See also
Custom business days
Timestamp limitations#
Since pandas represents timestamps in nanosecond resolution, the time span that
can be represented using a 64-bit integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145224193')
In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')
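As a minimal sketch of what this limit means in practice (assuming the nanosecond-resolution timestamps shown above), constructing a Timestamp outside this range raises OutOfBoundsDatetime:

import pandas as pd

try:
    pd.Timestamp("1500-01-01")  # well before Timestamp.min
except pd.errors.OutOfBoundsDatetime as err:
    print(err)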
See also
Representing out-of-bounds spans
Indexing#
One of the main uses for DatetimeIndex is as an index for pandas objects.
The DatetimeIndex class contains many time series related optimizations:
A large range of dates for various offsets are pre-computed and cached
under the hood in order to make generating subsequent date ranges very fast
(just have to grab a slice).
Fast shifting using the shift method on pandas objects.
Unioning of overlapping DatetimeIndex objects with the same frequency is
very fast (important for fast data alignment).
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic.
DatetimeIndex objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
See also
Reindexing methods
Note
While pandas does not force you to have a sorted date index, some of these
methods may have unexpected or incorrect behavior if the dates are unsorted.
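As a small illustrative sketch (the index values below are made up), sorting with sort_index() before relying on label-based selection avoids such surprises:

import numpy as np
import pandas as pd

# An intentionally unsorted daily index
idx = pd.to_datetime(["2011-01-03", "2011-01-01", "2011-01-02"])
s = pd.Series(np.arange(3), index=idx)

# Sort the index first so that label-based slicing behaves as expected
s = s.sort_index()
s["2011-01-01":"2011-01-02"]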
DatetimeIndex can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [94]: rng = pd.date_range(start, end, freq="BM")
In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [99]: ts["1/31/2011"]
Out[99]: 0.11920871129693428
In [100]: ts[datetime.datetime(2011, 12, 25):]
Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [101]: ts["10/31/2011":"12/31/2011"]
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in
the year or year and month as strings:
In [102]: ts["2011"]
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [103]: ts["2011-6"]
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame rows with a single string with getitem (e.g. frame[dtstring])
is deprecated starting with pandas 1.2.0 (given the ambiguity whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc (e.g. frame.loc[dtstring]) is still supported.
In [104]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [106]: dft.loc["2013"]
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and
time for the month:
In [107]: dft["2013-1":"2013-2"]
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [108]: dft["2013-1":"2013-2-28"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [109]: dft["2013-1":"2013-2-28 00:00:00"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [110]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [111]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [113]: dft2.loc["2013-01-05"]
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [114]: idx = pd.IndexSlice
In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [116]: dft2.loc[idx[:, "2013-01-05"], :]
Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
New in version 0.25.0.
Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0
In [119]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[119]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series object with a minute resolution index:
In [120]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [121]: series_minute.index.resolution
Out[121]: 'minute'
A timestamp string less accurate than a minute gives a Series object.
In [122]: series_minute["2011-12-31 23"]
Out[122]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [123]: series_minute["2011-12-31 23:59"]
Out[123]: 1
In [124]: series_minute["2011-12-31 23:59:00"]
Out[124]: 1
If the index resolution is second, then a minute-accurate timestamp gives a
Series.
In [125]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [126]: series_second.index.resolution
Out[126]: 'second'
In [127]: series_second["2011-12-31 23:59"]
Out[127]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index a DataFrame with .loc[] as well.
In [128]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [129]: dft_minute.loc["2011-12-31 23"]
Out[129]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame’s [] will be column-wise and not row-wise; see Indexing Basics. For example, dft_minute['2011-12-31 23:59'] will raise a KeyError because '2011-12-31 23:59' has the same resolution as the index and there is no column with that name.
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc["2011-12-31 23:59"]
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
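For illustration of the warning above, the same exact-resolution string used with [] is interpreted as a column lookup and can be expected to raise a KeyError (a sketch, reusing the dft_minute frame defined above):

# Exact-resolution string with [] is treated as a column label, not a row
try:
    dft_minute["2011-12-31 23:59"]
except KeyError as err:
    print("KeyError:", err)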
Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [132]: series_monthly.index.resolution
Out[132]: 'day'
In [133]: series_monthly["2011-12"] # returns Series
Out[133]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in the previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With no defaults.
In [135]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate() convenience function is provided that is similar
to slicing. Note that truncate assumes a 0 value for any unspecified date
component in a DatetimeIndex, in contrast to slicing, which returns any
partially matching dates:
In [136]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [138]: ts2.truncate(before="2011-11", after="2011-12")
Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [139]: ts2["2011-11":"2011-12"]
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex frequency
regularity will result in a DatetimeIndex, although frequency is lost:
In [140]: ts2[[0, 2, 6]].index
Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
Property            Description
year                The year of the datetime
month               The month of the datetime
day                 The days of the datetime
hour                The hour of the datetime
minute              The minutes of the datetime
second              The seconds of the datetime
microsecond         The microseconds of the datetime
nanosecond          The nanoseconds of the datetime
date                Returns datetime.date (does not contain timezone information)
time                Returns datetime.time (does not contain timezone information)
timetz              Returns datetime.time as local time with timezone information
dayofyear           The ordinal day of year
day_of_year         The ordinal day of year
weekofyear          The week ordinal of the year
week                The week ordinal of the year
dayofweek           The number of the day of the week with Monday=0, Sunday=6
day_of_week         The number of the day of the week with Monday=0, Sunday=6
weekday             The number of the day of the week with Monday=0, Sunday=6
quarter             Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month       The number of days in the month of the datetime
is_month_start      Logical indicating if first day of month (defined by frequency)
is_month_end        Logical indicating if last day of month (defined by frequency)
is_quarter_start    Logical indicating if first day of quarter (defined by frequency)
is_quarter_end      Logical indicating if last day of quarter (defined by frequency)
is_year_start       Logical indicating if first day of year (defined by frequency)
is_year_end         Logical indicating if last day of year (defined by frequency)
is_leap_year        Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can
access these properties via the .dt accessor, as detailed in the section
on .dt accessors.
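A brief sketch of the .dt accessor on a Series of timestamps (the dates below are arbitrary):

import pandas as pd

s = pd.Series(pd.date_range("2021-03-01", periods=3, freq="D"))
s.dt.year            # the year of each element
s.dt.dayofweek       # Monday=0, ..., Sunday=6
s.dt.is_month_start  # True only for 2021-03-01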
New in version 1.1.0.
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [141]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [142]: idx.isocalendar()
Out[142]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [143]: idx.to_series().dt.isocalendar()
Out[143]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify
a frequency that defined:
how the date times in DatetimeIndex were spaced when using date_range()
the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset
is similar to a Timedelta that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta day will always increment datetimes by 24 hours, while a DateOffset day
will increment datetimes to the same time the next day whether a day represents 23, 24 or 25 hours due to daylight
savings time. However, all DateOffset subclasses that are an hour or smaller
(Hour, Minute, Second, Milli, Micro, Nano) behave like
Timedelta and respect absolute time.
The basic DateOffset acts similarly to dateutil.relativedelta (relativedelta documentation)
in that it shifts a date time by the specified calendar duration. The
arithmetic operator (+) can be used to perform the shift.
# This particular day contains a day light savings time transition
In [144]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [145]: ts + pd.Timedelta(days=1)
Out[145]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [146]: ts + pd.DateOffset(days=1)
Out[146]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [147]: friday = pd.Timestamp("2018-01-05")
In [148]: friday.day_name()
Out[148]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [149]: two_business_days = 2 * pd.offsets.BDay()
In [150]: friday + two_business_days
Out[150]: Timestamp('2018-01-09 00:00:00')
In [151]: (friday + two_business_days).day_name()
Out[151]: 'Tuesday'
Most DateOffsets have associated frequency strings, or offset aliases, that can be passed
into freq keyword arguments. The available date offsets and associated frequency strings can be found below:
Date Offset                                Frequency String   Description
DateOffset                                 None               Generic offset class, defaults to absolute 24 hours
BDay or BusinessDay                        'B'                business day (weekday)
CDay or CustomBusinessDay                  'C'                custom business day
Week                                       'W'                one week, optionally anchored on a day of the week
WeekOfMonth                                'WOM'              the x-th day of the y-th week of each month
LastWeekOfMonth                            'LWOM'             the x-th day of the last week of each month
MonthEnd                                   'M'                calendar month end
MonthBegin                                 'MS'               calendar month begin
BMonthEnd or BusinessMonthEnd              'BM'               business month end
BMonthBegin or BusinessMonthBegin          'BMS'              business month begin
CBMonthEnd or CustomBusinessMonthEnd       'CBM'              custom business month end
CBMonthBegin or CustomBusinessMonthBegin   'CBMS'             custom business month begin
SemiMonthEnd                               'SM'               15th (or other day_of_month) and calendar month end
SemiMonthBegin                             'SMS'              15th (or other day_of_month) and calendar month begin
QuarterEnd                                 'Q'                calendar quarter end
QuarterBegin                               'QS'               calendar quarter begin
BQuarterEnd                                'BQ'               business quarter end
BQuarterBegin                              'BQS'              business quarter begin
FY5253Quarter                              'REQ'              retail (aka 52-53 week) quarter
YearEnd                                    'A'                calendar year end
YearBegin                                  'AS' or 'BYS'      calendar year begin
BYearEnd                                   'BA'               business year end
BYearBegin                                 'BAS'              business year begin
FY5253                                     'RE'               retail (aka 52-53 week) year
Easter                                     None               Easter holiday
BusinessHour                               'BH'               business hour
CustomBusinessHour                         'CBH'              custom business hour
Day                                        'D'                one absolute day
Hour                                       'H'                one hour
Minute                                     'T' or 'min'       one minute
Second                                     'S'                one second
Milli                                      'L' or 'ms'        one millisecond
Micro                                      'U' or 'us'        one microsecond
Nano                                       'N'                one nanosecond
DateOffsets additionally have rollforward() and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [152]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [153]: ts.day_name()
Out[153]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [154]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [155]: offset.rollforward(ts)
Out[155]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [156]: ts + offset
Out[156]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize() before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [157]: ts = pd.Timestamp("2014-01-01 09:00")
In [158]: day = pd.offsets.Day()
In [159]: day + ts
Out[159]: Timestamp('2014-01-02 09:00:00')
In [160]: (day + ts).normalize()
Out[160]: Timestamp('2014-01-02 00:00:00')
In [161]: ts = pd.Timestamp("2014-01-01 22:00")
In [162]: hour = pd.offsets.Hour()
In [163]: hour + ts
Out[163]: Timestamp('2014-01-01 23:00:00')
In [164]: (hour + ts).normalize()
Out[164]: Timestamp('2014-01-01 00:00:00')
In [165]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[165]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week offset for generating weekly data accepts a
weekday parameter which results in the generated dates always lying on a
particular day of the week:
In [166]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [167]: d
Out[167]: datetime.datetime(2008, 8, 18, 9, 0)
In [168]: d + pd.offsets.Week()
Out[168]: Timestamp('2008-08-25 09:00:00')
In [169]: d + pd.offsets.Week(weekday=4)
Out[169]: Timestamp('2008-08-22 09:00:00')
In [170]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[170]: 4
In [171]: d - pd.offsets.Week()
Out[171]: Timestamp('2008-08-11 09:00:00')
The normalize option will be effective for addition and subtraction.
In [172]: d + pd.offsets.Week(normalize=True)
Out[172]: Timestamp('2008-08-25 00:00:00')
In [173]: d - pd.offsets.Week(normalize=True)
Out[173]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd with the specific ending month:
In [174]: d + pd.offsets.YearEnd()
Out[174]: Timestamp('2008-12-31 09:00:00')
In [175]: d + pd.offsets.YearEnd(month=6)
Out[175]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series or DatetimeIndex to
apply the offset to each element.
In [176]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [177]: s = pd.Series(rng)
In [178]: rng
Out[178]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [179]: rng + pd.DateOffset(months=2)
Out[179]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [180]: s + pd.DateOffset(months=2)
Out[180]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [181]: s - pd.DateOffset(months=2)
Out[181]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour,
Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the
Timedelta section for more examples.
In [182]: s - pd.offsets.Day(2)
Out[182]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [183]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [184]: td
Out[184]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [185]: td + pd.offsets.Minute(15)
Out[185]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a
vectorized implementation. They can still be used but may be
significantly slower to compute and will show a PerformanceWarning:
In [186]: rng + pd.offsets.BQuarterEnd()
Out[186]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay or CustomBusinessDay class provides a parametric
BusinessDay class that can be used to create customized business day
calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt, where a Friday-Saturday weekend is observed.
In [187]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [188]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [189]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [190]: dt = datetime.datetime(2013, 4, 30)
In [191]: dt + 2 * bday_egypt
Out[191]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [192]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [193]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[193]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the
holiday calendar section for more information.
In [194]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [195]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [196]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [197]: dt + bday_us
Out[197]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined
in the usual way.
In [198]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [199]: dt = datetime.datetime(2013, 12, 17)
In [200]: dt + bmth_us
Out[200]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [201]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[201]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string ‘C’ is used to indicate that a CustomBusinessDay
DateOffset is used. Since CustomBusinessDay is a parameterised type,
instances of CustomBusinessDay may differ, and this is not detectable from
the ‘C’ frequency string alone. The user therefore needs to ensure that the
‘C’ frequency string is used consistently within the user’s application.
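To illustrate the point above, two differently parameterised CustomBusinessDay offsets can both report the same ‘C’ frequency string (a small sketch; the weekmask is arbitrary):

import pandas as pd

cday_default = pd.offsets.CustomBusinessDay()
cday_mon_wed = pd.offsets.CustomBusinessDay(weekmask="Mon Wed")

# Both report 'C' even though they generate different dates
print(cday_default.freqstr, cday_mon_wed.freqstr)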
Business hour#
The BusinessHour class provides a business hour representation on BusinessDay,
allowing the use of specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours.
Adding BusinessHour will increment a Timestamp by hourly frequency.
If the target Timestamp is outside business hours, it is first moved to the next
business hour and then incremented. If the result exceeds the end of the business
hours, the remaining hours are added to the next business day.
In [202]: bh = pd.offsets.BusinessHour()
In [203]: bh
Out[203]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [204]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[204]: 4
In [205]: pd.Timestamp("2014-08-01 10:00") + bh
Out[205]: Timestamp('2014-08-01 11:00:00')
# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [206]: pd.Timestamp("2014-08-01 08:00") + bh
Out[206]: Timestamp('2014-08-01 10:00:00')
# If the result is on the end time, move to the next business day
In [207]: pd.Timestamp("2014-08-01 16:00") + bh
Out[207]: Timestamp('2014-08-04 09:00:00')
# The remainder is added to the next day
In [208]: pd.Timestamp("2014-08-01 16:30") + bh
Out[208]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [209]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[209]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [210]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[210]: Timestamp('2014-07-31 15:00:00')
You can also specify the start and end times by keyword. The argument must
be a str with an hour:minute representation or a datetime.time
instance. Specifying seconds, microseconds or nanoseconds as part of the
business hour results in a ValueError.
In [211]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [212]: bh
Out[212]: <BusinessHour: BH=11:00-20:00>
In [213]: pd.Timestamp("2014-08-01 13:00") + bh
Out[213]: Timestamp('2014-08-01 14:00:00')
In [214]: pd.Timestamp("2014-08-01 09:00") + bh
Out[214]: Timestamp('2014-08-01 12:00:00')
In [215]: pd.Timestamp("2014-08-01 18:00") + bh
Out[215]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than the end time represents a business hour that spans midnight.
In this case, the business hours extend past midnight and overlap into the next day.
Valid business hours are distinguished by whether they start from a valid BusinessDay.
In [216]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [217]: bh
Out[217]: <BusinessHour: BH=17:00-09:00>
In [218]: pd.Timestamp("2014-08-01 17:00") + bh
Out[218]: Timestamp('2014-08-01 18:00:00')
In [219]: pd.Timestamp("2014-08-01 23:00") + bh
Out[219]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [220]: pd.Timestamp("2014-08-02 04:00") + bh
Out[220]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [221]: pd.Timestamp("2014-08-04 04:00") + bh
Out[221]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours results in
the next business hour start or the previous day’s end, respectively. Unlike other offsets, BusinessHour.rollforward
may, by definition, produce a different result from applying the offset (i.e. addition).
This is because one day’s business hour end is equal to the next day’s business hour start. For example,
under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and
2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [222]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[222]: Timestamp('2014-08-01 17:00:00')
In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[223]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [224]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[224]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay never overlaps.
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[226]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary
holidays, you can use CustomBusinessHour offset, as explained in the
following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which
allows you to specify arbitrary holidays. CustomBusinessHour works the same way
as BusinessHour except that it skips the specified custom holidays.
In [227]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [228]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [229]: dt = datetime.datetime(2014, 1, 17, 15)
In [230]: dt + bhour_us
Out[230]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [231]: dt + bhour_us * 2
Out[231]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by either BusinessHour or CustomBusinessDay.
In [232]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [233]: dt + bhour_mon * 2
Out[233]: Timestamp('2014-01-21 10:00:00')
Offset aliases#
A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as offset aliases.
Alias       Description
B           business day frequency
C           custom business day frequency
D           calendar day frequency
W           weekly frequency
M           month end frequency
SM          semi-month end frequency (15th and end of month)
BM          business month end frequency
CBM         custom business month end frequency
MS          month start frequency
SMS         semi-month start frequency (1st and 15th)
BMS         business month start frequency
CBMS        custom business month start frequency
Q           quarter end frequency
BQ          business quarter end frequency
QS          quarter start frequency
BQS         business quarter start frequency
A, Y        year end frequency
BA, BY      business year end frequency
AS, YS      year start frequency
BAS, BYS    business year start frequency
BH          business hour frequency
H           hourly frequency
T, min      minutely frequency
S           secondly frequency
L, ms       milliseconds
U, us       microseconds
N           nanoseconds
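For example (an illustrative sketch), a couple of the aliases from the table above used with date_range:

import pandas as pd

pd.date_range("2021-01-01", periods=4, freq="6H")  # every six hours
pd.date_range("2021-01-01", periods=4, freq="SM")  # semi-month end: 15th and month end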
Note
When using the offset aliases above, it should be noted that functions
such as date_range(), bdate_range(), will only return
timestamps that are in the interval defined by start_date and
end_date. If the start_date does not correspond to the frequency,
the returned timestamps will start at the next valid timestamp, same for
end_date, the returned timestamps will stop at the previous valid
timestamp.
For example, for the offset MS, if the start_date is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date is not the first day of a month, the last
returned timestamp will be the first day of the corresponding month.
In [234]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [235]: dates_lst_1
Out[235]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [236]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [237]: dates_lst_2
Out[237]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
We can see in the above example that date_range() and
bdate_range() will only return the valid timestamps between the
start_date and end_date. If these are not valid timestamps for the
given frequency, the start_date is rolled forward to the next valid
timestamp (and the end_date back to the previous one).
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in
most functions:
In [238]: pd.date_range(start, periods=5, freq="B")
Out[238]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [239]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[239]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine together day and intraday offsets:
In [240]: pd.date_range(start, periods=10, freq="2h20min")
Out[240]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [241]: pd.date_range(start, periods=10, freq="1D10U")
Out[241]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
Alias         Description
W-SUN         weekly frequency (Sundays). Same as ‘W’
W-MON         weekly frequency (Mondays)
W-TUE         weekly frequency (Tuesdays)
W-WED         weekly frequency (Wednesdays)
W-THU         weekly frequency (Thursdays)
W-FRI         weekly frequency (Fridays)
W-SAT         weekly frequency (Saturdays)
(B)Q(S)-DEC   quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN   quarterly frequency, year ends in January
(B)Q(S)-FEB   quarterly frequency, year ends in February
(B)Q(S)-MAR   quarterly frequency, year ends in March
(B)Q(S)-APR   quarterly frequency, year ends in April
(B)Q(S)-MAY   quarterly frequency, year ends in May
(B)Q(S)-JUN   quarterly frequency, year ends in June
(B)Q(S)-JUL   quarterly frequency, year ends in July
(B)Q(S)-AUG   quarterly frequency, year ends in August
(B)Q(S)-SEP   quarterly frequency, year ends in September
(B)Q(S)-OCT   quarterly frequency, year ends in October
(B)Q(S)-NOV   quarterly frequency, year ends in November
(B)A(S)-DEC   annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN   annual frequency, anchored end of January
(B)A(S)-FEB   annual frequency, anchored end of February
(B)A(S)-MAR   annual frequency, anchored end of March
(B)A(S)-APR   annual frequency, anchored end of April
(B)A(S)-MAY   annual frequency, anchored end of May
(B)A(S)-JUN   annual frequency, anchored end of June
(B)A(S)-JUL   annual frequency, anchored end of July
(B)A(S)-AUG   annual frequency, anchored end of August
(B)A(S)-SEP   annual frequency, anchored end of September
(B)A(S)-OCT   annual frequency, anchored end of October
(B)A(S)-NOV   annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors
for DatetimeIndex, as well as various other timeseries-related functions
in pandas.
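For instance (a short sketch), a weekly frequency anchored on Wednesdays:

import pandas as pd

pd.date_range("2011-01-01", periods=3, freq="W-WED")
# DatetimeIndex(['2011-01-05', '2011-01-12', '2011-01-19'], dtype='datetime64[ns]', freq='W-WED')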
Anchored offset semantics#
For those offsets that are anchored to the start or end of a specific
frequency (MonthEnd, MonthBegin, WeekEnd, etc.), the following
rules apply to rolling forward and backward.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous)
anchor point and then moved |n| - 1 additional steps forward or backward.
In [242]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[242]: Timestamp('2014-02-01 00:00:00')
In [243]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[243]: Timestamp('2014-01-31 00:00:00')
In [244]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-01-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2013-12-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[246]: Timestamp('2014-05-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[247]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards
or backwards.
In [248]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[248]: Timestamp('2014-02-01 00:00:00')
In [249]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[249]: Timestamp('2014-02-28 00:00:00')
In [250]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2013-12-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2013-12-31 00:00:00')
In [252]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[252]: Timestamp('2014-05-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[253]: Timestamp('2013-10-01 00:00:00')
For the case when n=0, the date is not moved if on an anchor point, otherwise
it is rolled forward to the next anchor point.
In [254]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[254]: Timestamp('2014-02-01 00:00:00')
In [255]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[255]: Timestamp('2014-01-31 00:00:00')
In [256]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-01-01 00:00:00')
In [257]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used
with CustomBusinessDay or in other analyses that require a predefined
set of holidays. The AbstractHolidayCalendar class provides all the necessary
methods to return a list of holidays and only rules need to be defined
in a specific holiday calendar class. Furthermore, the start_date and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar is the
only calendar provided with pandas, and it primarily serves as an example for
developing other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an
observance rule determines when that holiday is observed if it falls on a weekend
or some other non-observed day. Defined observance rules are:
Rule                     Description
nearest_workday          move Saturday to Friday and Sunday to Monday
sunday_to_monday         move Sunday to following Monday
next_monday_or_tuesday   move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday          move Saturday and Sunday to previous Friday
next_monday              move Saturday and Sunday to following Monday
An example of how holidays and holiday calendars are defined:
In [258]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [259]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [260]: cal = ExampleCalendar()
In [261]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[261]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint: weekday=MO(2) is the same as 2 * Week(weekday=2).
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar. Like any other offset,
it can be used to create a DatetimeIndex or added to datetime
or Timestamp objects.
In [262]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[262]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [263]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [264]: datetime.datetime(2012, 5, 25) + offset
Out[264]: Timestamp('2012-05-29 00:00:00')
In [265]: datetime.datetime(2012, 7, 3) + offset
Out[265]: Timestamp('2012-07-05 00:00:00')
In [266]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[266]: Timestamp('2012-07-06 00:00:00')
In [267]: datetime.datetime(2012, 7, 6) + offset
Out[267]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date and end_date class attributes
of AbstractHolidayCalendar. The defaults are shown below.
In [268]: AbstractHolidayCalendar.start_date
Out[268]: Timestamp('1970-01-01 00:00:00')
In [269]: AbstractHolidayCalendar.end_date
Out[269]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as
datetime/Timestamp/string.
In [270]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [271]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [272]: cal.holidays()
Out[272]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [273]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [274]: cal = get_calendar("ExampleCalendar")
In [275]: cal.rules
Out[275]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [276]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [277]: new_cal.rules
Out[277]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Time Series-related instance methods#
Shifting / lagging#
One may want to shift or lag the values in a time series back and forward in
time. The method for this is shift(), which is available on all of
the pandas objects.
In [278]: ts = pd.Series(range(len(rng)), index=rng)
In [279]: ts = ts[:5]
In [280]: ts.shift(1)
Out[280]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64
The shift method accepts a freq argument which can accept a
DateOffset class or other timedelta-like object or also an
offset alias.
When freq is specified, the shift method changes all the dates in the index
rather than changing the alignment of the data and the index:
In [281]: ts.shift(5, freq="D")
Out[281]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64
In [282]: ts.shift(5, freq=pd.offsets.BDay())
Out[282]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
dtype: int64
In [283]: ts.shift(5, freq="BM")
Out[283]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
dtype: int64
Note that when freq is specified, the leading entry is no longer NaN
because the data is not being realigned.
Frequency conversion#
The primary function for changing frequencies is the asfreq()
method. For a DatetimeIndex, this is basically just a thin, but convenient
wrapper around reindex() which generates a date_range and
calls reindex.
In [284]: dr = pd.date_range("1/1/2010", periods=3, freq=3 * pd.offsets.BDay())
In [285]: ts = pd.Series(np.random.randn(3), index=dr)
In [286]: ts
Out[286]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64
In [287]: ts.asfreq(pd.offsets.BDay())
Out[287]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation
method for any gaps that may appear after the frequency conversion.
In [288]: ts.asfreq(pd.offsets.BDay(), method="pad")
Out[288]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64
Filling forward / backward#
Related to asfreq and reindex is fillna(), which is
documented in the missing data section.
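A minimal sketch of filling the gaps introduced by a frequency conversion, using ffill() (equivalent in spirit to the method="pad" example above):

import numpy as np
import pandas as pd

dr = pd.date_range("2010-01-01", periods=3, freq=3 * pd.offsets.BDay())
ts = pd.Series(np.random.randn(3), index=dr)

# Convert to business-day frequency, then forward fill the resulting NaNs
ts.asfreq(pd.offsets.BDay()).ffill()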
Converting to Python datetimes#
DatetimeIndex can be converted to an array of Python native
datetime.datetime objects using the to_pydatetime method.
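For example (a small sketch):

import pandas as pd

idx = pd.date_range("2021-01-01", periods=3, freq="D")
idx.to_pydatetime()
# array([datetime.datetime(2021, 1, 1, 0, 0), ...], dtype=object)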
Resampling#
pandas has simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects,
see the groupby docs.
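A small sketch of combining groupby with resample (the store/sales columns are made up for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"store": ["a", "b"] * 5, "sales": np.arange(10)},
    index=pd.date_range("2021-01-01", periods=10, freq="D"),
)

# Monthly totals per store
df.groupby("store")["sales"].resample("M").sum()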
Basics#
In [289]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [290]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [291]: ts.resample("5Min").sum()
Out[291]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any function available via dispatching is available as
a method of the returned object, including sum, mean, std, sem,
max, min, median, first, last, ohlc:
In [292]: ts.resample("5Min").mean()
Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [293]: ts.resample("5Min").ohlc()
Out[293]:
open high low close
2012-01-01 308 460 9 205
In [294]: ts.resample("5Min").max()
Out[294]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [295]: ts.resample("5Min", closed="right").mean()
Out[295]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [296]: ts.resample("5Min", closed="left").mean()
Out[296]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label are used to manipulate the resulting labels.
label specifies whether the result is labeled with the beginning or
the end of the interval.
In [297]: ts.resample("5Min").mean() # by default label='left'
Out[297]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", label="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default values for label and closed are ‘left’ for all
frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’,
which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time, as in the following example with
the BusinessDay frequency:
In [299]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [300]: s.iloc[2] = pd.NaT
In [301]: s.dt.day_name()
Out[301]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [302]: s.resample("B").last().dt.day_name()
Out[302]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday.
To get the behavior where the value for Sunday is pushed to Monday, use
instead
In [303]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[303]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
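For instance (an illustrative sketch reusing the secondly series ts from above), kind="period" labels the result with a PeriodIndex rather than timestamps:

# Resample to 5-minute bins labeled by periods instead of timestamps
ts.resample("5Min", kind="period").sum()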
Upsampling#
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [304]: ts[:2].resample("250L").asfreq()
Out[304]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [305]: ts[:2].resample("250L").ffill()
Out[305]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [306]: ts[:2].resample("250L").ffill(limit=2)
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
Sparse resampling#
Sparse time series are those where you have far fewer points relative
to the amount of time you are looking to resample. Naively upsampling a sparse
series can potentially generate lots of intermediate values. When you don’t want
to use a method to fill these values, e.g. fill_method is None, then
intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN.
In [307]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [308]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [309]: ts.resample("3T").sum()
Out[309]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [310]: from functools import partial
In [311]: from pandas.tseries.frequencies import to_offset
In [312]: def round(t, freq):
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [313]: ts.groupby(partial(round, freq="3T")).sum()
Out[313]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
Aggregation#
Similar to the aggregating API, groupby API, and the window API,
a Resampler can be selectively resampled.
When resampling a DataFrame, the default is to act on all columns with the same function.
In [314]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [315]: r = df.resample("3T")
In [316]: r.mean()
Out[316]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [317]: r["A"].mean()
Out[317]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [318]: r[["A", "B"]].mean()
Out[318]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [319]: r["A"].agg([np.sum, np.mean, np.std])
Out[319]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [320]: r.agg([np.sum, np.mean])
Out[320]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [321]: r.agg({"A": np.sum, "B": lambda x: np.std(x, ddof=1)})
Out[321]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it
must be implemented on the resampled object:
In [322]: r.agg({"A": "sum", "B": "std"})
Out[322]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [323]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[323]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
If a DataFrame does not have a datetimelike index, but you instead want
to resample based on a datetimelike column in the frame, that column can be
passed to the on keyword.
In [324]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [325]: df
Out[325]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [326]: df.resample("M", on="date")[["a"]].sum()
Out[326]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of MultiIndex, its name or location can be passed to the
level keyword.
In [327]: df.resample("M", level="d")[["a"]].sum()
Out[327]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [328]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [329]: resampled = small.resample("H")
In [330]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__ for more.
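If you just need a plain Python container out of the iteration, a minimal sketch (reusing the resampled object above) is:
# collect the sum of each non-empty hourly group, keyed by the bin label
sums = {name: group.sum() for name, group in resampled if len(group)}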
Use origin or offset to adjust the start of the bins#
New in version 1.1.0.
The bins of the grouping are adjusted based on the beginning of the day of the time series starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criteria. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [331]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [332]: middle = "2000-10-02 00:00:00"
In [333]: rng = pd.date_range(start, end, freq="7min")
In [334]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [335]: ts
Out[335]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' differ depending on the start of the time series:
In [336]: ts.resample("17min", origin="start_day").sum()
Out[336]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [337]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[337]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of the start of the time series:
In [338]: ts.resample("17min", origin="epoch").sum()
Out[338]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[339]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin:
In [340]: ts.resample("17min", origin="2001-01-01").sum()
Out[340]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[341]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If needed you can just adjust the bins with an offset Timedelta that would be added to the default origin.
Those two examples are equivalent for this time series:
In [342]: ts.resample("17min", origin="start").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [343]: ts.resample("17min", offset="23h30min").sum()
Out[343]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin in the last example. In that case, origin will be set to the first value of the time series.
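A quick consistency check (a sketch reusing ts from above) confirming that the two calls above produce the same result:
assert ts.resample("17min", origin="start").sum().equals(
    ts.resample("17min", offset="23h30min").sum()
)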
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq. The backward resample sets closed to 'right' by default since the last value should be considered as the edge point for the last bin.
We can set origin to 'end'. The value for a specific Timestamp index stands for the resample result from the current Timestamp minus freq to the current Timestamp with a right close.
In [344]: ts.resample('17min', origin='end').sum()
Out[344]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In addition to the 'start_day' option, 'end_day' is supported. This will set the origin to the ceiling midnight of the largest Timestamp.
In [345]: ts.resample('17min', origin='end_day').sum()
Out[345]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as shown by the following computation.
In [346]: ceil_mid = rng.max().ceil('D')
In [347]: freq = pd.offsets.Minute(17)
In [348]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [349]: bin_res
Out[349]: Timestamp('2000-10-02 00:29:00')
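If you need this edge for other frequencies, the computation can be wrapped in a small helper (not part of pandas, just a sketch; it reuses the to_offset imported earlier):
def last_bin_right_edge(index, freq):
    # right edge of the last bin for origin='end_day': ceiling midnight minus a
    # whole number of freq steps
    ceil_mid = index.max().ceil("D")
    freq = to_offset(freq)
    return ceil_mid - freq * ((ceil_mid - index.max()) // freq)
last_bin_right_edge(rng, "17min")  # Timestamp('2000-10-02 00:29:00')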
Time span representation#
Regular intervals of time are represented by Period objects in pandas while
sequences of Period objects are collected in a PeriodIndex, which can
be created with the convenience function period_range.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc).
You can specify the span via freq keyword using a frequency alias like below.
Because freq represents a span of Period, it cannot be negative like “-3D”.
In [350]: pd.Period("2012", freq="A-DEC")
Out[350]: Period('2012', 'A-DEC')
In [351]: pd.Period("2012-1-1", freq="D")
Out[351]: Period('2012-01-01', 'D')
In [352]: pd.Period("2012-1-1 19:00", freq="H")
Out[352]: Period('2012-01-01 19:00', 'H')
In [353]: pd.Period("2012-1-1 19:00", freq="5H")
Out[353]: Period('2012-01-01 19:00', '5H')
Adding and subtracting integers from periods shifts the period by its own
frequency. Arithmetic is not allowed between Period with different freq (span).
In [354]: p = pd.Period("2012", freq="A-DEC")
In [355]: p + 1
Out[355]: Period('2013', 'A-DEC')
In [356]: p - 3
Out[356]: Period('2009', 'A-DEC')
In [357]: p = pd.Period("2012-01", freq="2M")
In [358]: p + 2
Out[358]: Period('2012-05', '2M')
In [359]: p - 1
Out[359]: Period('2011-11', '2M')
In [360]: p == pd.Period("2012-01", freq="3M")
Out[360]: False
If Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [361]: p = pd.Period("2014-07-01 09:00", freq="H")
In [362]: p + pd.offsets.Hour(2)
Out[362]: Period('2014-07-01 11:00', 'H')
In [363]: p + datetime.timedelta(minutes=120)
Out[363]: Period('2014-07-01 11:00', 'H')
In [364]: p + np.timedelta64(7200, "s")
Out[364]: Period('2014-07-01 11:00', 'H')
In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If Period has other frequencies, only the same offsets can be added. Otherwise, ValueError will be raised.
In [365]: p = pd.Period("2014-07", freq="M")
In [366]: p + pd.offsets.MonthEnd(3)
Out[366]: Period('2014-10', 'M')
In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will
return the number of frequency units between them:
In [367]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[367]: <10 * YearEnds: month=12>
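If you need the count as a plain integer, the returned offset exposes it through its n attribute (a quick sketch):
(pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")).n  # 10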
PeriodIndex and period_range#
Regular sequences of Period objects can be collected in a PeriodIndex,
which can be constructed using the period_range convenience function:
In [368]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [369]: prng
Out[369]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex constructor can also be used directly:
In [370]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[370]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing a multiplied frequency outputs a sequence of Period objects with a
multiplied span.
In [371]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[371]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
PeriodIndex constructor.
In [372]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[372]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas
objects:
In [373]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [374]: ps
Out[374]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [375]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [376]: idx
Out[376]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [377]: idx + pd.offsets.Hour(2)
Out[377]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [378]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [379]: idx
Out[379]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [380]: idx + pd.offsets.MonthEnd(3)
Out[380]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
Period dtypes#
PeriodIndex has a custom period dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with
period[freq] like period[D] or period[M], using frequency strings.
In [381]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [382]: pi
Out[382]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [383]: pi.dtype
Out[383]: period[M]
The period dtype can be used in .astype(...). It allows one to change the
freq of a PeriodIndex like .asfreq() and convert a
DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [384]: pi.astype("period[D]")
Out[384]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [385]: pi.astype("datetime64[ns]")
Out[385]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [386]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [387]: dti
Out[387]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [388]: dti.astype("period[M]")
Out[388]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
New in version 1.1.0.
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [389]: ps["2011-01"]
Out[389]: -2.9169013294054507
In [390]: ps[datetime.datetime(2011, 12, 25):]
Out[390]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [391]: ps["10/31/2011":"12/31/2011"]
Out[391]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than PeriodIndex returns partial sliced data.
In [392]: ps["2011"]
Out[392]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [393]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [394]: dfp
Out[394]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [395]: dfp.loc["2013-01-01 10H"]
Out[395]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [396]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[396]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period and PeriodIndex can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [397]: p = pd.Period("2011", freq="A-DEC")
In [398]: p
Out[398]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can
specify whether to return the starting or ending month:
In [399]: p.asfreq("M", how="start")
Out[399]: Period('2011-01', 'M')
In [400]: p.asfreq("M", how="end")
Out[400]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [401]: p.asfreq("M", "s")
Out[401]: Period('2011-01', 'M')
In [402]: p.asfreq("M", "e")
Out[402]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of
quarterly frequency) automatically returns the super-period that includes the
input period:
In [403]: p = pd.Period("2011-12", freq="M")
In [404]: p.asfreq("A-NOV")
Out[404]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in
November, the monthly period of December 2011 is actually in the 2012 A-NOV
period.
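A quick check of that statement (a sketch):
# December 2011 falls in the fiscal year that ends in November 2012
assert pd.Period("2011-12", freq="M").asfreq("A-NOV") == pd.Period("2012", freq="A-NOV")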
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or
a few months into 2011. Via anchored frequencies, pandas works for all quarterly
frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [405]: p = pd.Period("2012Q1", freq="Q-DEC")
In [406]: p.asfreq("D", "s")
Out[406]: Period('2012-01-01', 'D')
In [407]: p.asfreq("D", "e")
Out[407]: Period('2012-03-31', 'D')
Q-MAR defines fiscal year end in March:
In [408]: p = pd.Period("2011Q4", freq="Q-MAR")
In [409]: p.asfreq("D", "s")
Out[409]: Period('2011-01-01', 'D')
In [410]: p.asfreq("D", "e")
Out[410]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp:
In [411]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [412]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [413]: ts
Out[413]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [414]: ps = ts.to_period()
In [415]: ps
Out[415]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [416]: ps.to_timestamp()
Out[416]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or
end of the period:
In [417]: ps.to_timestamp("D", how="s")
Out[417]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [418]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [419]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [420]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [421]: ts.head()
Out[421]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp bounds, see Timestamp limitations,
then you can use a PeriodIndex and/or Series of Periods to do computations.
In [422]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [423]: span
Out[423]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [424]: s = pd.Series([20121231, 20141130, 99991231])
In [425]: s
Out[425]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [426]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [427]: s.apply(conv)
Out[427]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [428]: s.apply(conv)[2]
Out[428]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [429]: span = pd.PeriodIndex(s.apply(conv))
In [430]: span
Out[430]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz and dateutil libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [431]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [432]: rng.tz is None
Out[432]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize method or the tz keyword argument in
date_range(), Timestamp, or DatetimeIndex.
You can either pass pytz or dateutil time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz time zone objects by default.
To return dateutil time zone objects, append dateutil/ before the string.
In pytz you can find a list of common (and less common) time zones using
from pytz import common_timezones, all_timezones.
dateutil uses the OS time zones so there isn’t a fixed list available. For
common zones, the names are the same as pytz.
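For example, to peek at a few of the pytz names (a sketch; the exact contents depend on your installed pytz version):
import pytz
# these strings can be passed to pandas directly, or prefixed with 'dateutil/'
pytz.common_timezones[:5]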
In [433]: import dateutil
# pytz
In [434]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [435]: rng_pytz.tz
Out[435]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [436]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [437]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [438]: rng_dateutil.tz
Out[438]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [439]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [440]: rng_utc.tz
Out[440]: tzutc()
New in version 0.25.0.
# datetime.timezone
In [441]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [442]: rng_utc.tz
Out[442]: datetime.timezone.utc
Note that the UTC time zone is a special case in dateutil and should be constructed explicitly
as an instance of dateutil.tz.tzutc. You can also construct other time
zone objects explicitly first.
In [443]: import pytz
# pytz
In [444]: tz_pytz = pytz.timezone("Europe/London")
In [445]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [446]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [447]: rng_pytz.tz == tz_pytz
Out[447]: True
# dateutil
In [448]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [449]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [450]: rng_dateutil.tz == tz_dateutil
Out[450]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert method.
In [451]: rng_pytz.tz_convert("US/Eastern")
Out[451]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz time zones, DatetimeIndex will construct a different
time zone object than a Timestamp for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp objects that may have different UTC offsets and cannot be
succinctly represented by one pytz time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [452]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [453]: dti.tz
Out[453]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [454]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [455]: ts.tz
Out[455]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern.
Warning
Be aware that a time zone definition across versions of time zone libraries may not
be considered equal. This may cause problems when working with stored data that
is localized using one version and operated on with a different version.
See here for how to handle such a situation.
Warning
For pytz time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern')).
Instead, the datetime needs to be localized using the localize method
on the pytz time zone object.
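A minimal sketch of the correct pattern with pytz:
import datetime
import pytz
eastern = pytz.timezone("US/Eastern")
# correct: let pytz resolve the UTC offset via localize() instead of passing the
# zone to the datetime constructor directly
dt = eastern.localize(datetime.datetime(2011, 1, 1))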
Warning
Be aware that for times in the future, correct conversion between time zones
(and UTC) cannot be guaranteed by any time zone library because a timezone’s
offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies
in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments
to timezone aware dates will not be applied. If and when the underlying libraries are fixed,
the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [456]: d_2037 = "2037-03-31T010101"
In [457]: d_2038 = "2038-03-31T010101"
In [458]: DST = "Europe/London"
In [459]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [460]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex or Timestamp will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [461]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [462]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [463]: rng_eastern[2]
Out[463]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
In [464]: rng_berlin[2]
Out[464]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [465]: rng_eastern[2] == rng_berlin[2]
Out[465]: True
Operations between Series in different time zones will yield UTC
Series, aligning the data on the UTC timestamps:
In [466]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [467]: eastern = ts_utc.tz_convert("US/Eastern")
In [468]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [469]: result = eastern + berlin
In [470]: result
Out[470]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [471]: result.index
Out[471]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None) or tz_convert(None).
tz_localize(None) will remove the time zone yielding the local time representation.
tz_convert(None) will remove the time zone after converting to UTC time.
In [472]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [473]: didx
Out[473]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [474]: didx.tz_localize(None)
Out[474]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [475]: didx.tz_convert(None)
Out[475]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [476]: didx.tz_convert("UTC").tz_localize(None)
Out[476]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
New in version 1.1.0.
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only for constructing from naive datetime.datetime
(see datetime documentation for details) or from Timestamp
or for constructing from components (see below). Only dateutil timezones are supported
(see dateutil documentation
for dateutil methods that deal with ambiguous datetimes) as pytz
timezones do not support fold (see pytz documentation
for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime
with pytz, please use Timestamp.tz_localize(). In general, we recommend to rely
on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct
control over how they are handled.
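A sketch of the pytz route for the same ambiguous wall time:
# ambiguous=True picks the first (DST, +01:00) occurrence, False the second (+00:00)
pd.Timestamp("2019-10-27 01:30:00").tz_localize("Europe/London", ambiguous=True)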
In [477]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[477]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [478]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[478]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
Ambiguous times when localizing#
tz_localize may not be able to determine the UTC offset of a timestamp
because daylight savings time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [479]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail, as there are ambiguous times ('11/06/2011 01:00'):
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [480]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[480]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [481]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[481]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [482]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[482]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [483]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [484]: dti
Out[484]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [485]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[485]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [486]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[486]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [487]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[487]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [488]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[488]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
Time zone Series operations#
A Series with time zone naive values is
represented with a dtype of datetime64[ns].
In [489]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [490]: s_naive
Out[490]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is
represented with a dtype of datetime64[ns, tz], where tz is the time zone:
In [491]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [492]: s_aware
Out[492]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series
can be manipulated via the .dt accessor; see the dt accessor section.
For example, to localize a naive timestamp and convert it to a time zone aware one:
In [493]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[493]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Time zone information can also be manipulated using the astype method.
This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [494]: s_aware.astype("datetime64[ns, CET]")
Out[494]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note
Calling Series.to_numpy() on a Series returns a NumPy array of the data.
NumPy does not currently support time zones (even though it is printing in the local time zone!),
therefore an object array of Timestamps is returned for time zone aware data:
In [495]: s_naive.to_numpy()
Out[495]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [496]: s_aware.to_numpy()
Out[496]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
Converting to an object array of Timestamps preserves the time zone
information. For example, when converting back to a Series:
In [497]: pd.Series(s_aware.to_numpy())
Out[497]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values
converted to UTC) instead of an array of objects, you can specify the
dtype argument:
In [498]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[498]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
| 914
| 1,334
|
Get list of Hours Between Two Datetime Variables
I have a dataframe that looks like this:
Date Name Provider Task StartDateTime LastDateTime
2020-01-01 00:00:00 Bob PEM ED A 7a-4p 2020-01-01 07:00:00 2020-01-01 16:00:00
2020-01-02 00:00:00 Tom PEM ED C 10p-2a 2020-01-02 22:00:00 2020-01-03 02:00:00
I would like to list the hours between each person's StartDateTime and LastDateTime (datetime64[ns]) and then create an updated dataframe to reflect those lists. So for example, the updated dataframe would look like this:
Name Date Hour
Bob 2020-01-01 7
Bob 2020-01-01 8
Bob 2020-01-01 9
...
Tom 2020-01-02 22
Tom 2020-01-02 23
Tom 2020-01-03 0
Tom 2020-01-03 1
...
I honestly do not have a solid idea where to start. I've found some articles that may provide a foundation, but I'm not sure how to adapt my query to the code below since I want the counts based on the row and hour values.
from datetime import date, timedelta

def daterange(date1, date2):
    # yield every date from date1 to date2, inclusive
    for n in range(int((date2 - date1).days) + 1):
        yield date1 + timedelta(n)

start_dt = date(2015, 12, 20)
end_dt = date(2016, 1, 11)
for dt in daterange(start_dt, end_dt):
    print(dt.strftime("%Y-%m-%d"))
https://www.w3resource.com/python-exercises/date-time-exercise/python-date-time-exercise-50.php
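One possible approach (a sketch, not part of the original post; it assumes StartDateTime and LastDateTime are already datetime64 columns as shown above) is to expand each row with pd.date_range:
import pandas as pd

rows = []
for _, r in df.iterrows():
    # one record per hour of the shift, endpoints included
    for ts in pd.date_range(r["StartDateTime"], r["LastDateTime"], freq="H"):
        rows.append({"Name": r["Name"], "Date": ts.date(), "Hour": ts.hour})
out = pd.DataFrame(rows)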
|
66,038,113
|
How to filter pandas dataframe which contains a mixture of string and float?
|
<p><strong>Data</strong></p>
<pre><code> {'G1': {0: '1927 (57.83%)',
1: '45.00 [39.00 - 51.00]',
2: '909 (27.28%)',
3: '2029 (60.89%)',
4: '393 (11.79%)',
5: '1 (0.03%)',
6: '577 (17.32%)',
7: '0 (0.00%)',
8: '0 (0.00%)',
9: '2257 (67.74%)',
10: '0 (0.00%)',
11: '0 (0.00%)',
12: '0 (0.00%)',
13: np.nan,
14: '0 (0.00%)',
15: '1019 (30.58%)',
16: '802 (24.07%)',
17: '1091 (32.74%)',
18: '420 (12.61%)',
19: np.nan,
20: '0 (0.00%)',
21: '1433 (43.01%)',
22: '1001 (30.04%)',
23: '898 (26.95%)',
24: '733 (22.00%)',
25: '0 (0.00%)',
26: '0 (0.00%)',
27: '0 (0.00%)',
28: '0 (0.00%)',
29: '0 (0.00%)',
30: '0 (0.00%)',
31: '1177 (35.32%)',
32: '0 (0.00%)',
33: '369 (11.07%)',
34: '1 (0.03%)',
35: '150 (4.50%)',
36: '0 (0.00%)',
37: '0 (0.00%)',
38: '675 (20.26%)',
39: '227 (6.81%)',
40: '0 (0.00%)',
41: np.nan,
42: '0 (0.00%)',
43: '0 (0.00%)',
44: '0 (0.00%)',
45: '0 (0.00%)',
46: '0.08 [0.05 - 0.14]',
47: '0.08 [0.05 - 0.14]',
48: '0.08 [0.05 - 0.13]',
49: '0.08 [0.05 - 0.13]',
50: '0.08 [0.05 - 0.14]',
51: '0.09 [0.05 - 0.14]',
52: '0.08 [0.05 - 0.13]',
53: '0.08 [0.05 - 0.13]',
54: '0.07 [0.04 - 0.13]',
55: '0.07 [0.05 - 0.13]',
56: '3.98 [3.63 - 4.28]',
57: '4.03 [3.73 - 4.33]',
58: '4.00 [3.68 - 4.29]',
59: '4.25 [3.91 - 4.50]',
60: '4.00 [3.64 - 4.29]',
61: '4.00 [3.62 - 4.30]',
62: '4.07 [3.79 - 4.33]',
63: '4.22 [3.92 - 4.52]',
64: '3.92 [3.58 - 4.22]',
65: '4.29 [4.00 - 4.58]',
66: '3.83 [3.50 - 4.17]',
67: '3.92 [3.55 - 4.25]',
68: '4.00 [3.67 - 4.32]',
69: '3.95 [3.63 - 4.24]',
70: '4.00 [3.61 - 4.32]',
71: '1.44 [1.14 - 1.89]'},
'p_value': {0: '0.136',
1: '<0.001',
2: '<0.001',
3: '0.015',
4: '0.132',
5: '0.634',
6: '<0.001',
7: '<0.001',
8: '<0.001',
9: '0.033',
10: '0.605',
11: '<0.001',
12: '<0.001',
13: np.nan,
14: '<0.001',
15: '<0.001',
16: '0.106',
17: '<0.001',
18: '<0.001',
19: np.nan,
20: '<0.001',
21: '<0.001',
22: '<0.001',
23: '<0.001',
24: '<0.001',
25: '<0.001',
26: '0.366',
27: '<0.001',
28: '<0.001',
29: '<0.001',
30: '<0.001',
31: '<0.001',
32: '<0.001',
33: '<0.001',
34: '<0.001',
35: '<0.001',
36: '0.002',
37: '0.221',
38: '<0.001',
39: '<0.001',
40: '<0.001',
41: np.nan,
42: '<0.001',
43: '0.221',
44: '<0.001',
45: '0.081',
46: '0.315',
47: '0.084',
48: '0.170',
49: '0.002',
50: '<0.001',
51: '0.544',
52: '<0.001',
53: '0.011',
54: '0.340',
55: '0.194',
56: '0.366',
57: '0.450',
58: '0.627',
59: '0.851',
60: '0.975',
61: '0.465',
62: '0.047',
63: '0.912',
64: '0.006',
65: '<0.001',
66: '0.220',
67: '0.923',
68: '0.038',
69: '0.610',
70: '0.259',
71: '0.021'}}
</code></pre>
<p>How can I filter the data above so that the result contains only rows whose p_value is <= 0.05 or equal to '<0.001'?</p>
<p>The result should be something like this</p>
<p><a href="https://i.stack.imgur.com/Z69zu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z69zu.png" alt="enter image description here" /></a></p>
| 66,038,144
| 2021-02-04T01:24:00.170000
| 2
| null | 1
| 61
|
pandas
|
<p>Convert the p_value column to float first, then filter with <code>p_value <= 0.05</code>:</p>
<pre><code>p_value = df['p_value'].str.strip('<').astype(float)
df[p_value <= 0.05]
</code></pre>
<p>result:</p>
<pre><code> G1 p_value
1 45.00 [39.00 - 51.00] <0.001
2 909 (27.28%) <0.001
3 2029 (60.89%) 0.015
6 577 (17.32%) <0.001
7 0 (0.00%) <0.001
8 0 (0.00%) <0.001
9 2257 (67.74%) 0.033
11 0 (0.00%) <0.001
12 0 (0.00%) <0.001
14 0 (0.00%) <0.001
15 1019 (30.58%) <0.001
17 1091 (32.74%) <0.001
18 420 (12.61%) <0.001
20 0 (0.00%) <0.001
21 1433 (43.01%) <0.001
22 1001 (30.04%) <0.001
23 898 (26.95%) <0.001
24 733 (22.00%) <0.001
25 0 (0.00%) <0.001
27 0 (0.00%) <0.001
28 0 (0.00%) <0.001
29 0 (0.00%) <0.001
30 0 (0.00%) <0.001
31 1177 (35.32%) <0.001
32 0 (0.00%) <0.001
33 369 (11.07%) <0.001
34 1 (0.03%) <0.001
35 150 (4.50%) <0.001
36 0 (0.00%) 0.002
38 675 (20.26%) <0.001
39 227 (6.81%) <0.001
40 0 (0.00%) <0.001
42 0 (0.00%) <0.001
44 0 (0.00%) <0.001
49 0.08 [0.05 - 0.13] 0.002
50 0.08 [0.05 - 0.14] <0.001
52 0.08 [0.05 - 0.13] <0.001
53 0.08 [0.05 - 0.13] 0.011
62 4.07 [3.79 - 4.33] 0.047
64 3.92 [3.58 - 4.22] 0.006
65 4.29 [4.00 - 4.58] <0.001
68 4.00 [3.67 - 4.32] 0.038
71 1.44 [1.14 - 1.89] 0.021
</code></pre>
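<p>An alternative sketch (not from the original answer) that lets <code>pd.to_numeric</code> handle the conversion and the NaN rows:</p>
<pre><code>p_value = pd.to_numeric(df['p_value'].str.lstrip('<'), errors='coerce')
df[p_value <= 0.05]
</code></pre>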
| 2021-02-04T01:29:59.207000
| 2
|
https://pandas.pydata.org/docs/user_guide/io.html
|
Convert the p_value column to float first, then filter with p_value <= 0.05:
p_value = df['p_value'].str.strip('<').astype(float)
df[p_value <= 0.05]
result:
G1 p_value
1 45.00 [39.00 - 51.00] <0.001
2 909 (27.28%) <0.001
3 2029 (60.89%) 0.015
6 577 (17.32%) <0.001
7 0 (0.00%) <0.001
8 0 (0.00%) <0.001
9 2257 (67.74%) 0.033
11 0 (0.00%) <0.001
12 0 (0.00%) <0.001
14 0 (0.00%) <0.001
15 1019 (30.58%) <0.001
17 1091 (32.74%) <0.001
18 420 (12.61%) <0.001
20 0 (0.00%) <0.001
21 1433 (43.01%) <0.001
22 1001 (30.04%) <0.001
23 898 (26.95%) <0.001
24 733 (22.00%) <0.001
25 0 (0.00%) <0.001
27 0 (0.00%) <0.001
28 0 (0.00%) <0.001
29 0 (0.00%) <0.001
30 0 (0.00%) <0.001
31 1177 (35.32%) <0.001
32 0 (0.00%) <0.001
33 369 (11.07%) <0.001
34 1 (0.03%) <0.001
35 150 (4.50%) <0.001
36 0 (0.00%) 0.002
38 675 (20.26%) <0.001
39 227 (6.81%) <0.001
40 0 (0.00%) <0.001
42 0 (0.00%) <0.001
44 0 (0.00%) <0.001
49 0.08 [0.05 - 0.13] 0.002
50 0.08 [0.05 - 0.14] <0.001
52 0.08 [0.05 - 0.13] <0.001
53 0.08 [0.05 - 0.13] 0.011
62 4.07 [3.79 - 4.33] 0.047
64 3.92 [3.58 - 4.22] 0.006
65 4.29 [4.00 - 4.58] <0.001
68 4.00 [3.67 - 4.32] 0.038
71 1.44 [1.14 - 1.89] 0.021
| 0
| 1,653
|
How to filter pandas dataframe which contains a mixture of string and float?
Data
{'G1': {0: '1927 (57.83%)',
1: '45.00 [39.00 - 51.00]',
2: '909 (27.28%)',
3: '2029 (60.89%)',
4: '393 (11.79%)',
5: '1 (0.03%)',
6: '577 (17.32%)',
7: '0 (0.00%)',
8: '0 (0.00%)',
9: '2257 (67.74%)',
10: '0 (0.00%)',
11: '0 (0.00%)',
12: '0 (0.00%)',
13: np.nan,
14: '0 (0.00%)',
15: '1019 (30.58%)',
16: '802 (24.07%)',
17: '1091 (32.74%)',
18: '420 (12.61%)',
19: np.nan,
20: '0 (0.00%)',
21: '1433 (43.01%)',
22: '1001 (30.04%)',
23: '898 (26.95%)',
24: '733 (22.00%)',
25: '0 (0.00%)',
26: '0 (0.00%)',
27: '0 (0.00%)',
28: '0 (0.00%)',
29: '0 (0.00%)',
30: '0 (0.00%)',
31: '1177 (35.32%)',
32: '0 (0.00%)',
33: '369 (11.07%)',
34: '1 (0.03%)',
35: '150 (4.50%)',
36: '0 (0.00%)',
37: '0 (0.00%)',
38: '675 (20.26%)',
39: '227 (6.81%)',
40: '0 (0.00%)',
41: np.nan,
42: '0 (0.00%)',
43: '0 (0.00%)',
44: '0 (0.00%)',
45: '0 (0.00%)',
46: '0.08 [0.05 - 0.14]',
47: '0.08 [0.05 - 0.14]',
48: '0.08 [0.05 - 0.13]',
49: '0.08 [0.05 - 0.13]',
50: '0.08 [0.05 - 0.14]',
51: '0.09 [0.05 - 0.14]',
52: '0.08 [0.05 - 0.13]',
53: '0.08 [0.05 - 0.13]',
54: '0.07 [0.04 - 0.13]',
55: '0.07 [0.05 - 0.13]',
56: '3.98 [3.63 - 4.28]',
57: '4.03 [3.73 - 4.33]',
58: '4.00 [3.68 - 4.29]',
59: '4.25 [3.91 - 4.50]',
60: '4.00 [3.64 - 4.29]',
61: '4.00 [3.62 - 4.30]',
62: '4.07 [3.79 - 4.33]',
63: '4.22 [3.92 - 4.52]',
64: '3.92 [3.58 - 4.22]',
65: '4.29 [4.00 - 4.58]',
66: '3.83 [3.50 - 4.17]',
67: '3.92 [3.55 - 4.25]',
68: '4.00 [3.67 - 4.32]',
69: '3.95 [3.63 - 4.24]',
70: '4.00 [3.61 - 4.32]',
71: '1.44 [1.14 - 1.89]'},
'p_value': {0: '0.136',
1: '<0.001',
2: '<0.001',
3: '0.015',
4: '0.132',
5: '0.634',
6: '<0.001',
7: '<0.001',
8: '<0.001',
9: '0.033',
10: '0.605',
11: '<0.001',
12: '<0.001',
13: np.nan,
14: '<0.001',
15: '<0.001',
16: '0.106',
17: '<0.001',
18: '<0.001',
19: np.nan,
20: '<0.001',
21: '<0.001',
22: '<0.001',
23: '<0.001',
24: '<0.001',
25: '<0.001',
26: '0.366',
27: '<0.001',
28: '<0.001',
29: '<0.001',
30: '<0.001',
31: '<0.001',
32: '<0.001',
33: '<0.001',
34: '<0.001',
35: '<0.001',
36: '0.002',
37: '0.221',
38: '<0.001',
39: '<0.001',
40: '<0.001',
41: np.nan,
42: '<0.001',
43: '0.221',
44: '<0.001',
45: '0.081',
46: '0.315',
47: '0.084',
48: '0.170',
49: '0.002',
50: '<0.001',
51: '0.544',
52: '<0.001',
53: '0.011',
54: '0.340',
55: '0.194',
56: '0.366',
57: '0.450',
58: '0.627',
59: '0.851',
60: '0.975',
61: '0.465',
62: '0.047',
63: '0.912',
64: '0.006',
65: '<0.001',
66: '0.220',
67: '0.923',
68: '0.038',
69: '0.610',
70: '0.259',
71: '0.021'}}
How can I filter the data above so that the result contains only rows whose p_value is <= 0.05 or equal to '<0.001'?
The result should be something like this
|
70,443,139
|
Trying to make pandas python code shorter
|
<p>I'm fairly new to python and pandas. This code does exactly what I want it to do</p>
<pre><code>dfmcomp=dfm.loc[dfm[ProgSel]=='C']
dfmcomp=dfmcomp[~dfmcomp.Code.isin(dfc.CourseCode)]
print(dfmcomp[['Code','Description']])
</code></pre>
<p>but I feel I should be able to do this in one line instead of three, and without creating an extra dataframe (dfmcomp). I'm trying to learn how to create better/neater code</p>
<p>I tried this to shorten it to two lines, by replacing dfmcomp in the second line with the code that created dfmcomp. I was hoping from there I could eliminate dfmcomp entirely and just print the sliced dfm, but it didn't work:</p>
<pre><code>dfmcomp=dfm[~dfm.loc[dfm[ProgSel]=='C'].Code.isin(dfc.CourseCode)]
print(dfmcomp[['Code','Description']])
</code></pre>
<p>I got this error</p>
<blockquote>
<p>IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).</p>
</blockquote>
<p>And I am stuck where to go next. Can anyone help? Can this be done better?</p>
| 70,443,154
| 2021-12-22T00:48:03.347000
| 3
| null | 2
| 63
|
python|pandas
|
<p>I think you should be able to do this?</p>
<pre><code>print(dfm.loc[dfm[ProgSel].eq('C') & ~dfm.Code.isin(dfc.CourseCode), ['Code','Description']])
</code></pre>
| 2021-12-22T00:50:24.240000
| 2
|
https://pandas.pydata.org/docs/user_guide/10min.html
|
10 minutes to pandas#
This is a short introduction to pandas, geared mainly for new users.
You can see more complex recipes in the Cookbook.
Customarily, we import as follows:
In [1]: import numpy as np
In [2]: import pandas as pd
Object creation#
I think you should be able to do this?
print(dfm.loc[dfm[ProgSel].eq('C') & ~dfm.Code.isin(dfc.CourseCode), ['Code','Description']])
See the Intro to data structures section.
Creating a Series by passing a list of values, letting pandas create
a default integer index:
In [3]: s = pd.Series([1, 3, 5, np.nan, 6, 8])
In [4]: s
Out[4]:
0 1.0
1 3.0
2 5.0
3 NaN
4 6.0
5 8.0
dtype: float64
Creating a DataFrame by passing a NumPy array, with a datetime index using date_range()
and labeled columns:
In [5]: dates = pd.date_range("20130101", periods=6)
In [6]: dates
Out[6]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [7]: df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list("ABCD"))
In [8]: df
Out[8]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
Creating a DataFrame by passing a dictionary of objects that can be
converted into a series-like structure:
In [9]: df2 = pd.DataFrame(
...: {
...: "A": 1.0,
...: "B": pd.Timestamp("20130102"),
...: "C": pd.Series(1, index=list(range(4)), dtype="float32"),
...: "D": np.array([3] * 4, dtype="int32"),
...: "E": pd.Categorical(["test", "train", "test", "train"]),
...: "F": "foo",
...: }
...: )
...:
In [10]: df2
Out[10]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
The columns of the resulting DataFrame have different
dtypes:
In [11]: df2.dtypes
Out[11]:
A float64
B datetime64[ns]
C float32
D int32
E category
F object
dtype: object
If you’re using IPython, tab completion for column names (as well as public
attributes) is automatically enabled. Here’s a subset of the attributes that
will be completed:
In [12]: df2.<TAB> # noqa: E225, E999
df2.A df2.bool
df2.abs df2.boxplot
df2.add df2.C
df2.add_prefix df2.clip
df2.add_suffix df2.columns
df2.align df2.copy
df2.all df2.count
df2.any df2.combine
df2.append df2.D
df2.apply df2.describe
df2.applymap df2.diff
df2.B df2.duplicated
As you can see, the columns A, B, C, and D are automatically
tab completed. E and F are there as well; the rest of the attributes have been
truncated for brevity.
Viewing data#
See the Basics section.
Use DataFrame.head() and DataFrame.tail() to view the top and bottom rows of the frame
respectively:
In [13]: df.head()
Out[13]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
In [14]: df.tail(3)
Out[14]:
A B C D
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
Display the DataFrame.index or DataFrame.columns:
In [15]: df.index
Out[15]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [16]: df.columns
Out[16]: Index(['A', 'B', 'C', 'D'], dtype='object')
DataFrame.to_numpy() gives a NumPy representation of the underlying data.
Note that this can be an expensive operation when your DataFrame has
columns with different data types, which comes down to a fundamental difference
between pandas and NumPy: NumPy arrays have one dtype for the entire array,
while pandas DataFrames have one dtype per column. When you call
DataFrame.to_numpy(), pandas will find the NumPy dtype that can hold all
of the dtypes in the DataFrame. This may end up being object, which requires
casting every value to a Python object.
For df, our DataFrame of all floating-point values,
DataFrame.to_numpy() is fast and doesn’t require copying data:
In [17]: df.to_numpy()
Out[17]:
array([[ 0.4691, -0.2829, -1.5091, -1.1356],
[ 1.2121, -0.1732, 0.1192, -1.0442],
[-0.8618, -2.1046, -0.4949, 1.0718],
[ 0.7216, -0.7068, -1.0396, 0.2719],
[-0.425 , 0.567 , 0.2762, -1.0874],
[-0.6737, 0.1136, -1.4784, 0.525 ]])
For df2, the DataFrame with multiple dtypes,
DataFrame.to_numpy() is relatively expensive:
In [18]: df2.to_numpy()
Out[18]:
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo']],
dtype=object)
Note
DataFrame.to_numpy() does not include the index or column
labels in the output.
describe() shows a quick statistic summary of your data:
In [19]: df.describe()
Out[19]:
A B C D
count 6.000000 6.000000 6.000000 6.000000
mean 0.073711 -0.431125 -0.687758 -0.233103
std 0.843157 0.922818 0.779887 0.973118
min -0.861849 -2.104569 -1.509059 -1.135632
25% -0.611510 -0.600794 -1.368714 -1.076610
50% 0.022070 -0.228039 -0.767252 -0.386188
75% 0.658444 0.041933 -0.034326 0.461706
max 1.212112 0.567020 0.276232 1.071804
Transposing your data:
In [20]: df.T
Out[20]:
2013-01-01 2013-01-02 2013-01-03 2013-01-04 2013-01-05 2013-01-06
A 0.469112 1.212112 -0.861849 0.721555 -0.424972 -0.673690
B -0.282863 -0.173215 -2.104569 -0.706771 0.567020 0.113648
C -1.509059 0.119209 -0.494929 -1.039575 0.276232 -1.478427
D -1.135632 -1.044236 1.071804 0.271860 -1.087401 0.524988
DataFrame.sort_index() sorts by an axis:
In [21]: df.sort_index(axis=1, ascending=False)
Out[21]:
D C B A
2013-01-01 -1.135632 -1.509059 -0.282863 0.469112
2013-01-02 -1.044236 0.119209 -0.173215 1.212112
2013-01-03 1.071804 -0.494929 -2.104569 -0.861849
2013-01-04 0.271860 -1.039575 -0.706771 0.721555
2013-01-05 -1.087401 0.276232 0.567020 -0.424972
2013-01-06 0.524988 -1.478427 0.113648 -0.673690
DataFrame.sort_values() sorts by values:
In [22]: df.sort_values(by="B")
Out[22]:
A B C D
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
Selection#
Note
While standard Python / NumPy expressions for selecting and setting are
intuitive and come in handy for interactive work, for production code, we
recommend the optimized pandas data access methods, DataFrame.at(), DataFrame.iat(),
DataFrame.loc() and DataFrame.iloc().
See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.
Getting#
Selecting a single column, which yields a Series,
equivalent to df.A:
In [23]: df["A"]
Out[23]:
2013-01-01 0.469112
2013-01-02 1.212112
2013-01-03 -0.861849
2013-01-04 0.721555
2013-01-05 -0.424972
2013-01-06 -0.673690
Freq: D, Name: A, dtype: float64
Selecting via [] (__getitem__), which slices the rows:
In [24]: df[0:3]
Out[24]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [25]: df["20130102":"20130104"]
Out[25]:
A B C D
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
Selection by label#
See more in Selection by Label using DataFrame.loc() or DataFrame.at().
For getting a cross section using a label:
In [26]: df.loc[dates[0]]
Out[26]:
A 0.469112
B -0.282863
C -1.509059
D -1.135632
Name: 2013-01-01 00:00:00, dtype: float64
Selecting on a multi-axis by label:
In [27]: df.loc[:, ["A", "B"]]
Out[27]:
A B
2013-01-01 0.469112 -0.282863
2013-01-02 1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04 0.721555 -0.706771
2013-01-05 -0.424972 0.567020
2013-01-06 -0.673690 0.113648
Showing label slicing, both endpoints are included:
In [28]: df.loc["20130102":"20130104", ["A", "B"]]
Out[28]:
A B
2013-01-02 1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04 0.721555 -0.706771
Reduction in the dimensions of the returned object:
In [29]: df.loc["20130102", ["A", "B"]]
Out[29]:
A 1.212112
B -0.173215
Name: 2013-01-02 00:00:00, dtype: float64
For getting a scalar value:
In [30]: df.loc[dates[0], "A"]
Out[30]: 0.4691122999071863
For getting fast access to a scalar (equivalent to the prior method):
In [31]: df.at[dates[0], "A"]
Out[31]: 0.4691122999071863
Selection by position#
See more in Selection by Position using DataFrame.iloc() or DataFrame.iat().
Select via the position of the passed integers:
In [32]: df.iloc[3]
Out[32]:
A 0.721555
B -0.706771
C -1.039575
D 0.271860
Name: 2013-01-04 00:00:00, dtype: float64
By integer slices, acting similar to NumPy/Python:
In [33]: df.iloc[3:5, 0:2]
Out[33]:
A B
2013-01-04 0.721555 -0.706771
2013-01-05 -0.424972 0.567020
By lists of integer position locations, similar to the NumPy/Python style:
In [34]: df.iloc[[1, 2, 4], [0, 2]]
Out[34]:
A C
2013-01-02 1.212112 0.119209
2013-01-03 -0.861849 -0.494929
2013-01-05 -0.424972 0.276232
For slicing rows explicitly:
In [35]: df.iloc[1:3, :]
Out[35]:
A B C D
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
For slicing columns explicitly:
In [36]: df.iloc[:, 1:3]
Out[36]:
B C
2013-01-01 -0.282863 -1.509059
2013-01-02 -0.173215 0.119209
2013-01-03 -2.104569 -0.494929
2013-01-04 -0.706771 -1.039575
2013-01-05 0.567020 0.276232
2013-01-06 0.113648 -1.478427
For getting a value explicitly:
In [37]: df.iloc[1, 1]
Out[37]: -0.17321464905330858
For getting fast access to a scalar (equivalent to the prior method):
In [38]: df.iat[1, 1]
Out[38]: -0.17321464905330858
Boolean indexing#
Using a single column’s values to select data:
In [39]: df[df["A"] > 0]
Out[39]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
Selecting values from a DataFrame where a boolean condition is met:
In [40]: df[df > 0]
Out[40]:
A B C D
2013-01-01 0.469112 NaN NaN NaN
2013-01-02 1.212112 NaN 0.119209 NaN
2013-01-03 NaN NaN NaN 1.071804
2013-01-04 0.721555 NaN NaN 0.271860
2013-01-05 NaN 0.567020 0.276232 NaN
2013-01-06 NaN 0.113648 NaN 0.524988
Using the isin() method for filtering:
In [41]: df2 = df.copy()
In [42]: df2["E"] = ["one", "one", "two", "three", "four", "three"]
In [43]: df2
Out[43]:
A B C D E
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632 one
2013-01-02 1.212112 -0.173215 0.119209 -1.044236 one
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804 two
2013-01-04 0.721555 -0.706771 -1.039575 0.271860 three
2013-01-05 -0.424972 0.567020 0.276232 -1.087401 four
2013-01-06 -0.673690 0.113648 -1.478427 0.524988 three
In [44]: df2[df2["E"].isin(["two", "four"])]
Out[44]:
A B C D E
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804 two
2013-01-05 -0.424972 0.567020 0.276232 -1.087401 four
Setting#
Setting a new column automatically aligns the data
by the indexes:
In [45]: s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range("20130102", periods=6))
In [46]: s1
Out[46]:
2013-01-02 1
2013-01-03 2
2013-01-04 3
2013-01-05 4
2013-01-06 5
2013-01-07 6
Freq: D, dtype: int64
In [47]: df["F"] = s1
Setting values by label:
In [48]: df.at[dates[0], "A"] = 0
Setting values by position:
In [49]: df.iat[0, 1] = 0
Setting by assigning with a NumPy array:
In [50]: df.loc[:, "D"] = np.array([5] * len(df))
The result of the prior setting operations:
In [51]: df
Out[51]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 5 NaN
2013-01-02 1.212112 -0.173215 0.119209 5 1.0
2013-01-03 -0.861849 -2.104569 -0.494929 5 2.0
2013-01-04 0.721555 -0.706771 -1.039575 5 3.0
2013-01-05 -0.424972 0.567020 0.276232 5 4.0
2013-01-06 -0.673690 0.113648 -1.478427 5 5.0
A where operation with setting:
In [52]: df2 = df.copy()
In [53]: df2[df2 > 0] = -df2
In [54]: df2
Out[54]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 -5 NaN
2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0
Missing data#
pandas primarily uses the value np.nan to represent missing data. It is by
default not included in computations. See the Missing Data section.
Reindexing allows you to change/add/delete the index on a specified axis. This
returns a copy of the data:
In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ["E"])
In [56]: df1.loc[dates[0] : dates[1], "E"] = 1
In [57]: df1
Out[57]:
A B C D F E
2013-01-01 0.000000 0.000000 -1.509059 5 NaN 1.0
2013-01-02 1.212112 -0.173215 0.119209 5 1.0 1.0
2013-01-03 -0.861849 -2.104569 -0.494929 5 2.0 NaN
2013-01-04 0.721555 -0.706771 -1.039575 5 3.0 NaN
DataFrame.dropna() drops any rows that have missing data:
In [58]: df1.dropna(how="any")
Out[58]:
A B C D F E
2013-01-02 1.212112 -0.173215 0.119209 5 1.0 1.0
DataFrame.fillna() fills missing data:
In [59]: df1.fillna(value=5)
Out[59]:
A B C D F E
2013-01-01 0.000000 0.000000 -1.509059 5 5.0 1.0
2013-01-02 1.212112 -0.173215 0.119209 5 1.0 1.0
2013-01-03 -0.861849 -2.104569 -0.494929 5 2.0 5.0
2013-01-04 0.721555 -0.706771 -1.039575 5 3.0 5.0
isna() gets the boolean mask where values are nan:
In [60]: pd.isna(df1)
Out[60]:
A B C D F E
2013-01-01 False False False False True False
2013-01-02 False False False False False False
2013-01-03 False False False False False True
2013-01-04 False False False False False True
Operations#
See the Basic section on Binary Ops.
Stats#
Operations in general exclude missing data.
Performing a descriptive statistic:
In [61]: df.mean()
Out[61]:
A -0.004474
B -0.383981
C -0.687758
D 5.000000
F 3.000000
dtype: float64
Same operation on the other axis:
In [62]: df.mean(1)
Out[62]:
2013-01-01 0.872735
2013-01-02 1.431621
2013-01-03 0.707731
2013-01-04 1.395042
2013-01-05 1.883656
2013-01-06 1.592306
Freq: D, dtype: float64
Operating with objects that have different dimensionality and need alignment.
In addition, pandas automatically broadcasts along the specified dimension:
In [63]: s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)
In [64]: s
Out[64]:
2013-01-01 NaN
2013-01-02 NaN
2013-01-03 1.0
2013-01-04 3.0
2013-01-05 5.0
2013-01-06 NaN
Freq: D, dtype: float64
In [65]: df.sub(s, axis="index")
Out[65]:
A B C D F
2013-01-01 NaN NaN NaN NaN NaN
2013-01-02 NaN NaN NaN NaN NaN
2013-01-03 -1.861849 -3.104569 -1.494929 4.0 1.0
2013-01-04 -2.278445 -3.706771 -4.039575 2.0 0.0
2013-01-05 -5.424972 -4.432980 -4.723768 0.0 -1.0
2013-01-06 NaN NaN NaN NaN NaN
Apply#
DataFrame.apply() applies a user defined function to the data:
In [66]: df.apply(np.cumsum)
Out[66]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 5 NaN
2013-01-02 1.212112 -0.173215 -1.389850 10 1.0
2013-01-03 0.350263 -2.277784 -1.884779 15 3.0
2013-01-04 1.071818 -2.984555 -2.924354 20 6.0
2013-01-05 0.646846 -2.417535 -2.648122 25 10.0
2013-01-06 -0.026844 -2.303886 -4.126549 30 15.0
In [67]: df.apply(lambda x: x.max() - x.min())
Out[67]:
A 2.073961
B 2.671590
C 1.785291
D 0.000000
F 4.000000
dtype: float64
Histogramming#
See more at Histogramming and Discretization.
In [68]: s = pd.Series(np.random.randint(0, 7, size=10))
In [69]: s
Out[69]:
0 4
1 2
2 1
3 2
4 6
5 4
6 4
7 6
8 4
9 4
dtype: int64
In [70]: s.value_counts()
Out[70]:
4 5
2 2
6 2
1 1
dtype: int64
String Methods#
Series is equipped with a set of string processing methods in the str
attribute that make it easy to operate on each element of the array, as in the
code snippet below. Note that pattern-matching in str generally uses regular
expressions by default (and in
some cases always uses them). See more at Vectorized String Methods.
In [71]: s = pd.Series(["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"])
In [72]: s.str.lower()
Out[72]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
Merge#
Concat#
pandas provides various facilities for easily combining together Series and
DataFrame objects with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
See the Merging section.
Concatenating pandas objects together along an axis with concat():
In [73]: df = pd.DataFrame(np.random.randn(10, 4))
In [74]: df
Out[74]:
0 1 2 3
0 -0.548702 1.467327 -1.015962 -0.483075
1 1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952 0.991460 -0.919069 0.266046
3 -0.709661 1.669052 1.037882 -1.705775
4 -0.919854 -0.042379 1.247642 -0.009920
5 0.290213 0.495767 0.362949 1.548106
6 -1.131345 -0.089329 0.337863 -0.945867
7 -0.932132 1.956030 0.017587 -0.016692
8 -0.575247 0.254161 -1.143704 0.215897
9 1.193555 -0.077118 -0.408530 -0.862495
# break it into pieces
In [75]: pieces = [df[:3], df[3:7], df[7:]]
In [76]: pd.concat(pieces)
Out[76]:
0 1 2 3
0 -0.548702 1.467327 -1.015962 -0.483075
1 1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952 0.991460 -0.919069 0.266046
3 -0.709661 1.669052 1.037882 -1.705775
4 -0.919854 -0.042379 1.247642 -0.009920
5 0.290213 0.495767 0.362949 1.548106
6 -1.131345 -0.089329 0.337863 -0.945867
7 -0.932132 1.956030 0.017587 -0.016692
8 -0.575247 0.254161 -1.143704 0.215897
9 1.193555 -0.077118 -0.408530 -0.862495
Note
Adding a column to a DataFrame is relatively fast. However, adding
a row requires a copy, and may be expensive. We recommend passing a
pre-built list of records to the DataFrame constructor instead
of building a DataFrame by iteratively appending records to it.
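A minimal sketch of the recommendation in this note, using hypothetical records; the speed difference is qualitative and not measured here:
import pandas as pd

# Preferred: collect plain records first, then build the DataFrame once.
records = [{"key": i, "value": i ** 2} for i in range(5)]
df_fast = pd.DataFrame(records)

# Discouraged: growing a DataFrame row by row copies the data at every step.
df_slow = pd.DataFrame([{"key": 0, "value": 0}])
for i in range(1, 5):
    new_row = pd.DataFrame([{"key": i, "value": i ** 2}])
    df_slow = pd.concat([df_slow, new_row], ignore_index=True)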
Join#
merge() enables SQL style join types along specific columns. See the Database style joining section.
In [77]: left = pd.DataFrame({"key": ["foo", "foo"], "lval": [1, 2]})
In [78]: right = pd.DataFrame({"key": ["foo", "foo"], "rval": [4, 5]})
In [79]: left
Out[79]:
key lval
0 foo 1
1 foo 2
In [80]: right
Out[80]:
key rval
0 foo 4
1 foo 5
In [81]: pd.merge(left, right, on="key")
Out[81]:
key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5
Another example that can be given is:
In [82]: left = pd.DataFrame({"key": ["foo", "bar"], "lval": [1, 2]})
In [83]: right = pd.DataFrame({"key": ["foo", "bar"], "rval": [4, 5]})
In [84]: left
Out[84]:
key lval
0 foo 1
1 bar 2
In [85]: right
Out[85]:
key rval
0 foo 4
1 bar 5
In [86]: pd.merge(left, right, on="key")
Out[86]:
key lval rval
0 foo 1 4
1 bar 2 5
Grouping#
By “group by” we are referring to a process involving one or more of the
following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
See the Grouping section.
In [87]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [88]: df
Out[88]:
A B C D
0 foo one 1.346061 -1.577585
1 bar one 1.511763 0.396823
2 foo two 1.627081 -0.105381
3 bar three -0.990582 -0.532532
4 foo two -0.441652 1.453749
5 bar two 1.211526 1.208843
6 foo one 0.268520 -0.080952
7 foo three 0.024580 -0.264610
Grouping and then applying the sum() function to the resulting
groups:
In [89]: df.groupby("A")[["C", "D"]].sum()
Out[89]:
C D
A
bar 1.732707 1.073134
foo 2.824590 -0.574779
Grouping by multiple columns forms a hierarchical index, and again we can
apply the sum() function:
In [90]: df.groupby(["A", "B"]).sum()
Out[90]:
C D
A B
bar one 1.511763 0.396823
three -0.990582 -0.532532
two 1.211526 1.208843
foo one 1.614581 -1.658537
three 0.024580 -0.264610
two 1.185429 1.348368
Reshaping#
See the sections on Hierarchical Indexing and
Reshaping.
Stack#
In [91]: tuples = list(
....: zip(
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: )
....: )
....:
In [92]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [93]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])
In [94]: df2 = df[:4]
In [95]: df2
Out[95]:
A B
first second
bar one -0.727965 -0.589346
two 0.339969 -0.693205
baz one -0.339355 0.593616
two 0.884345 1.591431
The stack() method “compresses” a level in the DataFrame’s
columns:
In [96]: stacked = df2.stack()
In [97]: stacked
Out[97]:
first second
bar one A -0.727965
B -0.589346
two A 0.339969
B -0.693205
baz one A -0.339355
B 0.593616
two A 0.884345
B 1.591431
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the
index), the inverse operation of stack() is
unstack(), which by default unstacks the last level:
In [98]: stacked.unstack()
Out[98]:
A B
first second
bar one -0.727965 -0.589346
two 0.339969 -0.693205
baz one -0.339355 0.593616
two 0.884345 1.591431
In [99]: stacked.unstack(1)
Out[99]:
second one two
first
bar A -0.727965 0.339969
B -0.589346 -0.693205
baz A -0.339355 0.884345
B 0.593616 1.591431
In [100]: stacked.unstack(0)
Out[100]:
first bar baz
second
one A -0.727965 -0.339355
B -0.589346 0.593616
two A 0.339969 0.884345
B -0.693205 1.591431
Pivot tables#
See the section on Pivot Tables.
In [101]: df = pd.DataFrame(
.....: {
.....: "A": ["one", "one", "two", "three"] * 3,
.....: "B": ["A", "B", "C"] * 4,
.....: "C": ["foo", "foo", "foo", "bar", "bar", "bar"] * 2,
.....: "D": np.random.randn(12),
.....: "E": np.random.randn(12),
.....: }
.....: )
.....:
In [102]: df
Out[102]:
A B C D E
0 one A foo -1.202872 0.047609
1 one B foo -1.814470 -0.136473
2 two C foo 1.018601 -0.561757
3 three A bar -0.595447 -1.623033
4 one B bar 1.395433 0.029399
5 one C bar -0.392670 -0.542108
6 two A foo 0.007207 0.282696
7 three B foo 1.928123 -0.087302
8 one C foo -0.055224 -1.575170
9 one A bar 2.395985 1.771208
10 two B bar 1.552825 0.816482
11 three C bar 0.166599 1.100230
pivot_table() pivots a DataFrame specifying the values, index and columns
In [103]: pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"])
Out[103]:
C bar foo
A B
one A 2.395985 -1.202872
B 1.395433 -1.814470
C -0.392670 -0.055224
three A -0.595447 NaN
B NaN 1.928123
C 0.166599 NaN
two A NaN 0.007207
B 1.552825 NaN
C NaN 1.018601
Time series#
pandas has simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications. See the Time Series section.
In [104]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [105]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [106]: ts.resample("5Min").sum()
Out[106]:
2012-01-01 24182
Freq: 5T, dtype: int64
Series.tz_localize() localizes a time series to a time zone:
In [107]: rng = pd.date_range("3/6/2012 00:00", periods=5, freq="D")
In [108]: ts = pd.Series(np.random.randn(len(rng)), rng)
In [109]: ts
Out[109]:
2012-03-06 1.857704
2012-03-07 -1.193545
2012-03-08 0.677510
2012-03-09 -0.153931
2012-03-10 0.520091
Freq: D, dtype: float64
In [110]: ts_utc = ts.tz_localize("UTC")
In [111]: ts_utc
Out[111]:
2012-03-06 00:00:00+00:00 1.857704
2012-03-07 00:00:00+00:00 -1.193545
2012-03-08 00:00:00+00:00 0.677510
2012-03-09 00:00:00+00:00 -0.153931
2012-03-10 00:00:00+00:00 0.520091
Freq: D, dtype: float64
Series.tz_convert() converts a timezones aware time series to another time zone:
In [112]: ts_utc.tz_convert("US/Eastern")
Out[112]:
2012-03-05 19:00:00-05:00 1.857704
2012-03-06 19:00:00-05:00 -1.193545
2012-03-07 19:00:00-05:00 0.677510
2012-03-08 19:00:00-05:00 -0.153931
2012-03-09 19:00:00-05:00 0.520091
Freq: D, dtype: float64
Converting between time span representations:
In [113]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [114]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [115]: ts
Out[115]:
2012-01-31 -1.475051
2012-02-29 0.722570
2012-03-31 -0.322646
2012-04-30 -1.601631
2012-05-31 0.778033
Freq: M, dtype: float64
In [116]: ps = ts.to_period()
In [117]: ps
Out[117]:
2012-01 -1.475051
2012-02 0.722570
2012-03 -0.322646
2012-04 -1.601631
2012-05 0.778033
Freq: M, dtype: float64
In [118]: ps.to_timestamp()
Out[118]:
2012-01-01 -1.475051
2012-02-01 0.722570
2012-03-01 -0.322646
2012-04-01 -1.601631
2012-05-01 0.778033
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [119]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [120]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [121]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [122]: ts.head()
Out[122]:
1990-03-01 09:00 -0.289342
1990-06-01 09:00 0.233141
1990-09-01 09:00 -0.223540
1990-12-01 09:00 0.542054
1991-03-01 09:00 -0.688585
Freq: H, dtype: float64
Categoricals#
pandas can include categorical data in a DataFrame. For full docs, see the
categorical introduction and the API documentation.
In [123]: df = pd.DataFrame(
.....: {"id": [1, 2, 3, 4, 5, 6], "raw_grade": ["a", "b", "b", "a", "a", "e"]}
.....: )
.....:
Converting the raw grades to a categorical data type:
In [124]: df["grade"] = df["raw_grade"].astype("category")
In [125]: df["grade"]
Out[125]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): ['a', 'b', 'e']
Rename the categories to more meaningful names:
In [126]: new_categories = ["very good", "good", "very bad"]
In [127]: df["grade"] = df["grade"].cat.rename_categories(new_categories)
Reorder the categories and simultaneously add the missing categories (methods under Series.cat() return a new Series by default):
In [128]: df["grade"] = df["grade"].cat.set_categories(
.....: ["very bad", "bad", "medium", "good", "very good"]
.....: )
.....:
In [129]: df["grade"]
Out[129]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): ['very bad', 'bad', 'medium', 'good', 'very good']
Sorting is per order in the categories, not lexical order:
In [130]: df.sort_values(by="grade")
Out[130]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
Grouping by a categorical column also shows empty categories:
In [131]: df.groupby("grade").size()
Out[131]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
Plotting#
See the Plotting docs.
We use the standard convention for referencing the matplotlib API:
In [132]: import matplotlib.pyplot as plt
In [133]: plt.close("all")
The plt.close method is used to close a figure window:
In [134]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [135]: ts = ts.cumsum()
In [136]: ts.plot();
If running under Jupyter Notebook, the plot will appear on plot(). Otherwise use
matplotlib.pyplot.show to show it or
matplotlib.pyplot.savefig to write it to a file.
In [137]: plt.show();
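For the non-interactive case just mentioned, a small sketch of writing the figure to a file instead of showing it (the file name is only an example, and it reuses the ts series created above):
import matplotlib.pyplot as plt

ax = ts.plot()                     # plot the cumulative-sum series from above
ax.figure.savefig("ts_plot.png")   # write the current figure to disk
plt.close(ax.figure)               # release the figure when done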
On a DataFrame, the plot() method is a convenience to plot all
of the columns with labels:
In [138]: df = pd.DataFrame(
.....: np.random.randn(1000, 4), index=ts.index, columns=["A", "B", "C", "D"]
.....: )
.....:
In [139]: df = df.cumsum()
In [140]: plt.figure();
In [141]: df.plot();
In [142]: plt.legend(loc='best');
Importing and exporting data#
CSV#
Writing to a csv file: using DataFrame.to_csv()
In [143]: df.to_csv("foo.csv")
Reading from a csv file: using read_csv()
In [144]: pd.read_csv("foo.csv")
Out[144]:
Unnamed: 0 A B C D
0 2000-01-01 0.350262 0.843315 1.798556 0.782234
1 2000-01-02 -0.586873 0.034907 1.923792 -0.562651
2 2000-01-03 -1.245477 -0.963406 2.269575 -1.612566
3 2000-01-04 -0.252830 -0.498066 3.176886 -1.275581
4 2000-01-05 -1.044057 0.118042 2.768571 0.386039
.. ... ... ... ... ...
995 2002-09-22 -48.017654 31.474551 69.146374 -47.541670
996 2002-09-23 -47.207912 32.627390 68.505254 -48.828331
997 2002-09-24 -48.907133 31.990402 67.310924 -49.391051
998 2002-09-25 -50.146062 33.716770 67.717434 -49.037577
999 2002-09-26 -49.724318 33.479952 68.108014 -48.822030
[1000 rows x 5 columns]
HDF5#
Reading and writing to HDFStores.
Writing to a HDF5 Store using DataFrame.to_hdf():
In [145]: df.to_hdf("foo.h5", "df")
Reading from a HDF5 Store using read_hdf():
In [146]: pd.read_hdf("foo.h5", "df")
Out[146]:
A B C D
2000-01-01 0.350262 0.843315 1.798556 0.782234
2000-01-02 -0.586873 0.034907 1.923792 -0.562651
2000-01-03 -1.245477 -0.963406 2.269575 -1.612566
2000-01-04 -0.252830 -0.498066 3.176886 -1.275581
2000-01-05 -1.044057 0.118042 2.768571 0.386039
... ... ... ... ...
2002-09-22 -48.017654 31.474551 69.146374 -47.541670
2002-09-23 -47.207912 32.627390 68.505254 -48.828331
2002-09-24 -48.907133 31.990402 67.310924 -49.391051
2002-09-25 -50.146062 33.716770 67.717434 -49.037577
2002-09-26 -49.724318 33.479952 68.108014 -48.822030
[1000 rows x 4 columns]
Excel#
Reading and writing to Excel.
Writing to an excel file using DataFrame.to_excel():
In [147]: df.to_excel("foo.xlsx", sheet_name="Sheet1")
Reading from an excel file using read_excel():
In [148]: pd.read_excel("foo.xlsx", "Sheet1", index_col=None, na_values=["NA"])
Out[148]:
Unnamed: 0 A B C D
0 2000-01-01 0.350262 0.843315 1.798556 0.782234
1 2000-01-02 -0.586873 0.034907 1.923792 -0.562651
2 2000-01-03 -1.245477 -0.963406 2.269575 -1.612566
3 2000-01-04 -0.252830 -0.498066 3.176886 -1.275581
4 2000-01-05 -1.044057 0.118042 2.768571 0.386039
.. ... ... ... ... ...
995 2002-09-22 -48.017654 31.474551 69.146374 -47.541670
996 2002-09-23 -47.207912 32.627390 68.505254 -48.828331
997 2002-09-24 -48.907133 31.990402 67.310924 -49.391051
998 2002-09-25 -50.146062 33.716770 67.717434 -49.037577
999 2002-09-26 -49.724318 33.479952 68.108014 -48.822030
[1000 rows x 5 columns]
Gotchas#
If you are attempting to perform a boolean operation on a Series or DataFrame
you might see an exception like:
In [149]: if pd.Series([False, True, False]):
.....: print("I was true")
.....:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[149], line 1
----> 1 if pd.Series([False, True, False]):
2 print("I was true")
File ~/work/pandas/pandas/pandas/core/generic.py:1527, in NDFrame.__nonzero__(self)
1525 @final
1526 def __nonzero__(self) -> NoReturn:
-> 1527 raise ValueError(
1528 f"The truth value of a {type(self).__name__} is ambiguous. "
1529 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1530 )
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
See Comparisons and Gotchas for an explanation and what to do.
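A minimal sketch of the usual fixes, assuming one of the standard reductions was intended rather than element-wise truth:
import pandas as pd

s = pd.Series([False, True, False])

if s.any():        # True if at least one element is True
    print("at least one was true")
if not s.all():    # True unless every element is True
    print("not every element was true")
if not s.empty:    # truthiness of "does the Series contain anything?"
    print("the series is not empty")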
| 276
| 408
|
Trying to make pandas python code shorter
I'm fairly new to python and pandas. This code does exactly what I want it to do
dfmcomp=dfm.loc[dfm[ProgSel]=='C']
dfmcomp=dfmcomp[~dfmcomp.Code.isin(dfc.CourseCode)]
print(dfmcomp[['Code','Description']])
but I feel I should be able to do this in one line instead of three, and without creating an extra dataframe (dfmcomp). I'm trying to learn how to create better/neater code.
I tried this to shorten it to two lines, by replacing dfmcomp in the second line with the code that created dfmcomp. I was hoping from there I could eliminate dfmcomp entirely and just print the sliced dfm, but it didn't work:
dfmcomp=dfm[~dfm.loc[dfm[ProgSel]=='C'].Code.isin(dfc.CourseCode)]
print(dfmcomp[['Code','Description']])
I got this error
IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
And I am stuck where to go next. Can anyone help? Can this be done better?
|
68,854,237
|
How to fill the NaN values with the values contains in another column?
|
<p>I want to find if the split column contains anything from the class list. If yes, I want to update the category column using the values from the class list. The desired category is my optimal goal.</p>
<p><a href="https://i.stack.imgur.com/eenfm.png" rel="nofollow noreferrer">sampledata</a></p>
<pre><code>domain split Category Desired Category
[email protected] XYT.com Null XYT
[email protected] XTY.com Null Null
[email protected] ssa.com Null ssa
[email protected] bbc.com Null bbc
[email protected] abk.com Null abk
[email protected] ssb.com Null ssb
Class=['NaN','XYT','ssa','abk','abc','def','asds','ssb','bbc','XY','ab']
for index, row in df.iterrows():
    for x in class:
        intersection=row.split.contains(x)
        if intersection:
            df.loc[index,'class'] = intersection
</code></pre>
<p>Just cannot get it right</p>
<p>Please help,
Thanks</p>
| 68,854,391
| 2021-08-19T20:31:21.983000
| 2
| null | 2
| 64
|
python|pandas
|
<p>Use <code>str.extract</code>. Create a regular expression that will match one of the words in the list and extract the word that matches (or NaN if none).</p>
<p><strong>Update</strong>: regex alternation with '|' is not greedy; it takes the first alternative that matches rather than the longest one, so you have to reverse-sort your list manually to put longer strings (such as 'XYT') ahead of their prefixes (such as 'XY').</p>
<pre><code>lst = ['NaN','XY','ab','XYT','ssa','abk','abc','def','asds','ssb','bbc']
lst = sorted(lst, reverse=True)
pat = fr"({'|'.join(lst)})"
df['Category'] = df['split'].str.extract(pat)
</code></pre>
<pre><code>>>> df
domain split Category
0 [email protected] XYT.com XYT
1 [email protected] XTY.com NaN
2 [email protected] ssa.com ssa
3 [email protected] bbc.com bbc
4 [email protected] abk.com abk
5 [email protected] ssb.com ssb
>>> lst
['ssb', 'ssa', 'def', 'bbc', 'asds', 'abk', 'abc', 'ab', 'XYT', 'XY', 'NaN']
>>> pat
'(ssb|ssa|def|bbc|asds|abk|abc|ab|XYT|XY|NaN)'
</code></pre>
| 2021-08-19T20:49:23.233000
| 2
|
https://pandas.pydata.org/docs/user_guide/missing_data.html
|
Working with missing data#
Working with missing data#
In this section, we will discuss missing (also referred to as NA) values in
pandas.
Note
The choice of using NaN internally to denote missing data was largely
for simplicity and performance reasons.
Starting from pandas 1.0, some optional data types start experimenting
Use str.extract. Create a regular expression that will match one of the words in the list and extract the word that matches (or NaN if none).
Update: regex alternation with '|' is not greedy; it takes the first alternative that matches rather than the longest one, so you have to reverse-sort your list manually to put longer strings (such as 'XYT') ahead of their prefixes (such as 'XY').
lst = ['NaN','XY','ab','XYT','ssa','abk','abc','def','asds','ssb','bbc']
lst = sorted(lst, reverse=True)
pat = fr"({'|'.join(lst)})"
df['Category'] = df['split'].str.extract(pat)
>>> df
domain split Category
0 [email protected] XYT.com XYT
1 [email protected] XTY.com NaN
2 [email protected] ssa.com ssa
3 [email protected] bbc.com bbc
4 [email protected] abk.com abk
5 [email protected] ssb.com ssb
>>> lst
['ssb', 'ssa', 'def', 'bbc', 'asds', 'abk', 'abc', 'ab', 'XYT', 'XY', 'NaN']
>>> pat
'(ssb|ssa|def|bbc|asds|abk|abc|ab|XYT|XY|NaN)'
with a native NA scalar using a mask-based approach. See
here for more.
See the cookbook for some advanced strategies.
Values considered “missing”#
As data comes in many shapes and forms, pandas aims to be flexible with regard
to handling missing data. While NaN is the default missing value marker for
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python None will
arise and we wish to also consider that “missing” or “not available” or “NA”.
Note
If you want to consider inf and -inf to be “NA” in computations,
you can set pandas.options.mode.use_inf_as_na = True.
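A short illustrative sketch of that option's effect, assuming the option name given in the note; by default only NaN is flagged:
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.inf, -np.inf, np.nan])
print(s.isna())    # only the NaN is considered missing by default

with pd.option_context("mode.use_inf_as_na", True):
    print(s.isna())    # inf and -inf are now flagged as missing as well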
In [1]: df = pd.DataFrame(
...: np.random.randn(5, 3),
...: index=["a", "c", "e", "f", "h"],
...: columns=["one", "two", "three"],
...: )
...:
In [2]: df["four"] = "bar"
In [3]: df["five"] = df["one"] > 0
In [4]: df
Out[4]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
c -1.135632 1.212112 -0.173215 bar False
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
h 0.721555 -0.706771 -1.039575 bar True
In [5]: df2 = df.reindex(["a", "b", "c", "d", "e", "f", "g", "h"])
In [6]: df2
Out[6]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
b NaN NaN NaN NaN NaN
c -1.135632 1.212112 -0.173215 bar False
d NaN NaN NaN NaN NaN
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
g NaN NaN NaN NaN NaN
h 0.721555 -0.706771 -1.039575 bar True
To make detecting missing values easier (and across different array dtypes),
pandas provides the isna() and
notna() functions, which are also methods on
Series and DataFrame objects:
In [7]: df2["one"]
Out[7]:
a 0.469112
b NaN
c -1.135632
d NaN
e 0.119209
f -2.104569
g NaN
h 0.721555
Name: one, dtype: float64
In [8]: pd.isna(df2["one"])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
In [9]: df2["four"].notna()
Out[9]:
a True
b False
c True
d False
e True
f True
g False
h True
Name: four, dtype: bool
In [10]: df2.isna()
Out[10]:
one two three four five
a False False False False False
b True True True True True
c False False False False False
d True True True True True
e False False False False False
f False False False False False
g True True True True True
h False False False False False
Warning
One has to be mindful that in Python (and NumPy), the nan's don’t compare equal, but None's do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None # noqa: E711
Out[11]: True
In [12]: np.nan == np.nan
Out[12]: False
So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful information.
In [13]: df2["one"] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool
Integer dtypes and missing data#
Because NaN is a float, a column of integers with even one missing value
is cast to floating-point dtype (see Support for integer NA for more). pandas
provides a nullable integer array, which can be used by explicitly requesting
the dtype:
In [14]: pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
Out[14]:
0 1
1 2
2 <NA>
3 4
dtype: Int64
Alternatively, the string alias dtype='Int64' (note the capital "I") can be
used.
See Nullable integer data type for more.
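To make the casting behaviour described above concrete, a short sketch with illustrative values:
import numpy as np
import pandas as pd

print(pd.Series([1, 2, 3, 4]).dtype)                       # int64
print(pd.Series([1, 2, np.nan, 4]).dtype)                  # float64: the NaN forces the upcast
print(pd.Series([1, 2, np.nan, 4], dtype="Int64").dtype)   # Int64: integers kept, <NA> used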
Datetimes#
For datetime64[ns] types, NaT represents missing values. This is a pseudo-native
sentinel value that can be represented by NumPy in a singular dtype (datetime64[ns]).
pandas objects provide compatibility between NaT and NaN.
In [15]: df2 = df.copy()
In [16]: df2["timestamp"] = pd.Timestamp("20120101")
In [17]: df2
Out[17]:
one two three four five timestamp
a 0.469112 -0.282863 -1.509059 bar True 2012-01-01
c -1.135632 1.212112 -0.173215 bar False 2012-01-01
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h 0.721555 -0.706771 -1.039575 bar True 2012-01-01
In [18]: df2.loc[["a", "c", "h"], ["one", "timestamp"]] = np.nan
In [19]: df2
Out[19]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [20]: df2.dtypes.value_counts()
Out[20]:
float64 3
object 1
bool 1
datetime64[ns] 1
dtype: int64
Inserting missing data#
You can insert missing values by simply assigning to containers. The
actual missing value used will be chosen based on the dtype.
For example, numeric containers will always use NaN regardless of
the missing value type chosen:
In [21]: s = pd.Series([1, 2, 3])
In [22]: s.loc[0] = None
In [23]: s
Out[23]:
0 NaN
1 2.0
2 3.0
dtype: float64
Likewise, datetime containers will always use NaT.
For object containers, pandas will use the value given:
In [24]: s = pd.Series(["a", "b", "c"])
In [25]: s.loc[0] = None
In [26]: s.loc[1] = np.nan
In [27]: s
Out[27]:
0 None
1 NaN
2 c
dtype: object
Calculations with missing data#
Missing values propagate naturally through arithmetic operations between pandas
objects.
In [28]: a
Out[28]:
one two
a NaN -0.282863
c NaN 1.212112
e 0.119209 -1.044236
f -2.104569 -0.494929
h -2.104569 -0.706771
In [29]: b
Out[29]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [30]: a + b
Out[30]:
one three two
a NaN NaN -0.565727
c NaN NaN 2.424224
e 0.238417 NaN -2.088472
f -4.209138 NaN -0.989859
h NaN NaN -1.413542
The descriptive statistics and computational methods discussed in the
data structure overview (and listed here and here) are all written to
account for missing data. For example:
When summing data, NA (missing) values will be treated as zero.
If the data are all NA, the result will be 0.
Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include NA values, use skipna=False.
In [31]: df
Out[31]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [32]: df["one"].sum()
Out[32]: -1.9853605075978744
In [33]: df.mean(1)
Out[33]:
a -0.895961
c 0.519449
e -0.595625
f -0.509232
h -0.873173
dtype: float64
In [34]: df.cumsum()
Out[34]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e 0.119209 -0.114987 -2.544122
f -1.985361 -0.609917 -1.472318
h NaN -1.316688 -2.511893
In [35]: df.cumsum(skipna=False)
Out[35]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e NaN -0.114987 -2.544122
f NaN -0.609917 -1.472318
h NaN -1.316688 -2.511893
Sum/prod of empties/nans#
Warning
This behavior is now standard as of v0.22.0 and is consistent with the default in numpy; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
See v0.22.0 whatsnew for more.
The sum of an empty or all-NA Series or column of a DataFrame is 0.
In [36]: pd.Series([np.nan]).sum()
Out[36]: 0.0
In [37]: pd.Series([], dtype="float64").sum()
Out[37]: 0.0
The product of an empty or all-NA Series or column of a DataFrame is 1.
In [38]: pd.Series([np.nan]).prod()
Out[38]: 1.0
In [39]: pd.Series([], dtype="float64").prod()
Out[39]: 1.0
NA values in GroupBy#
NA groups in GroupBy are automatically excluded. This behavior is consistent
with R, for example:
In [40]: df
Out[40]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [41]: df.groupby("one").mean()
Out[41]:
two three
one
-2.104569 -0.494929 1.071804
0.119209 -1.044236 -0.861849
See the groupby section here for more information.
Cleaning / filling missing data#
pandas objects are equipped with various data manipulation methods for dealing
with missing data.
Filling missing values: fillna#
fillna() can “fill in” NA values with non-NA data in a couple
of ways, which we illustrate:
Replace NA with a scalar value
In [42]: df2
Out[42]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [43]: df2.fillna(0)
Out[43]:
one two three four five timestamp
a 0.000000 -0.282863 -1.509059 bar True 0
c 0.000000 1.212112 -0.173215 bar False 0
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01 00:00:00
f -2.104569 -0.494929 1.071804 bar False 2012-01-01 00:00:00
h 0.000000 -0.706771 -1.039575 bar True 0
In [44]: df2["one"].fillna("missing")
Out[44]:
a missing
c missing
e 0.119209
f -2.104569
h missing
Name: one, dtype: object
Fill gaps forward or backward
Using the same filling arguments as reindexing, we
can propagate non-NA values forward or backward:
In [45]: df
Out[45]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [46]: df.fillna(method="pad")
Out[46]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h -2.104569 -0.706771 -1.039575
Limit the amount of filling
If we only want consecutive gaps filled up to a certain number of data points,
we can use the limit keyword:
In [47]: df
Out[47]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN NaN NaN
f NaN NaN NaN
h NaN -0.706771 -1.039575
In [48]: df.fillna(method="pad", limit=1)
Out[48]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 1.212112 -0.173215
f NaN NaN NaN
h NaN -0.706771 -1.039575
To remind you, these are the available filling methods:
Method              Action
pad / ffill         Fill values forward
bfill / backfill    Fill values backward
With time series data, using pad/ffill is extremely common so that the “last
known value” is available at every time point.
ffill() is equivalent to fillna(method='ffill')
and bfill() is equivalent to fillna(method='bfill')
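A quick sketch of that equivalence on a toy Series (the values are illustrative only):
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])
assert s.ffill().equals(s.fillna(method="ffill"))   # forward fill: 1, 1, 1, 4
assert s.bfill().equals(s.fillna(method="bfill"))   # backward fill: 1, 4, 4, 4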
Filling with a PandasObject#
You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series
must match the columns of the frame you wish to fill. The
use case of this is to fill a DataFrame with the mean of that column.
In [49]: dff = pd.DataFrame(np.random.randn(10, 3), columns=list("ABC"))
In [50]: dff.iloc[3:5, 0] = np.nan
In [51]: dff.iloc[4:6, 1] = np.nan
In [52]: dff.iloc[5:8, 2] = np.nan
In [53]: dff
Out[53]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN NaN -1.157892
5 -1.344312 NaN NaN
6 -0.109050 1.643563 NaN
7 0.357021 -0.674600 NaN
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [54]: dff.fillna(dff.mean())
Out[54]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [55]: dff.fillna(dff.mean()["B":"C"])
Out[55]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Same result as above, but this aligns the ‘fill’ value, which is
a Series in this case.
In [56]: dff.where(pd.notna(dff), dff.mean(), axis="columns")
Out[56]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Dropping axis labels with missing data: dropna#
You may wish to simply exclude labels from a data set which refer to missing
data. To do this, use dropna():
In [57]: df
Out[57]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -0.706771 -1.039575
In [58]: df.dropna(axis=0)
Out[58]:
Empty DataFrame
Columns: [one, two, three]
Index: []
In [59]: df.dropna(axis=1)
Out[59]:
two three
a -0.282863 -1.509059
c 1.212112 -0.173215
e 0.000000 0.000000
f 0.000000 0.000000
h -0.706771 -1.039575
In [60]: df["one"].dropna()
Out[60]: Series([], Name: one, dtype: float64)
An equivalent dropna() is available for Series.
DataFrame.dropna has considerably more options than Series.dropna, which can be
examined in the API.
Interpolation#
Both Series and DataFrame objects have interpolate()
that, by default, performs linear interpolation at missing data points.
In [61]: ts
Out[61]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [62]: ts.count()
Out[62]: 66
In [63]: ts.plot()
Out[63]: <AxesSubplot: >
In [64]: ts.interpolate()
Out[64]:
2000-01-31 0.469112
2000-02-29 0.434469
2000-03-31 0.399826
2000-04-28 0.365184
2000-05-31 0.330541
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [65]: ts.interpolate().count()
Out[65]: 100
In [66]: ts.interpolate().plot()
Out[66]: <AxesSubplot: >
Index aware interpolation is available via the method keyword:
In [67]: ts2
Out[67]:
2000-01-31 0.469112
2000-02-29 NaN
2002-07-31 -5.785037
2005-01-31 NaN
2008-04-30 -9.011531
dtype: float64
In [68]: ts2.interpolate()
Out[68]:
2000-01-31 0.469112
2000-02-29 -2.657962
2002-07-31 -5.785037
2005-01-31 -7.398284
2008-04-30 -9.011531
dtype: float64
In [69]: ts2.interpolate(method="time")
Out[69]:
2000-01-31 0.469112
2000-02-29 0.270241
2002-07-31 -5.785037
2005-01-31 -7.190866
2008-04-30 -9.011531
dtype: float64
For a floating-point index, use method='values':
In [70]: ser
Out[70]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64
In [71]: ser.interpolate()
Out[71]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64
In [72]: ser.interpolate(method="values")
Out[72]:
0.0 0.0
1.0 1.0
10.0 10.0
dtype: float64
You can also interpolate with a DataFrame:
In [73]: df = pd.DataFrame(
....: {
....: "A": [1, 2.1, np.nan, 4.7, 5.6, 6.8],
....: "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4],
....: }
....: )
....:
In [74]: df
Out[74]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
In [75]: df.interpolate()
Out[75]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
The method argument gives access to fancier interpolation methods.
If you have scipy installed, you can pass the name of a 1-d interpolation routine to method.
You’ll want to consult the full scipy interpolation documentation and reference guide for details.
The appropriate interpolation method will depend on the type of data you are working with.
If you are dealing with a time series that is growing at an increasing rate,
method='quadratic' may be appropriate.
If you have values approximating a cumulative distribution function,
then method='pchip' should work well.
To fill missing values with goal of smooth plotting, consider method='akima'.
Warning
These methods require scipy.
In [76]: df.interpolate(method="barycentric")
Out[76]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400
In [77]: df.interpolate(method="pchip")
Out[77]:
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000
In [78]: df.interpolate(method="akima")
Out[78]:
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
When interpolating via a polynomial or spline approximation, you must also specify
the degree or order of the approximation:
In [79]: df.interpolate(method="spline", order=2)
Out[79]:
A B
0 1.000000 0.250000
1 2.100000 -0.428598
2 3.404545 1.206900
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [80]: df.interpolate(method="polynomial", order=2)
Out[80]:
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
Compare several methods:
In [81]: np.random.seed(2)
In [82]: ser = pd.Series(np.arange(1, 10.1, 0.25) ** 2 + np.random.randn(37))
In [83]: missing = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
In [84]: ser[missing] = np.nan
In [85]: methods = ["linear", "quadratic", "cubic"]
In [86]: df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})
In [87]: df.plot()
Out[87]: <AxesSubplot: >
Another use case is interpolation at new values.
Suppose you have 100 observations from some distribution. And let’s suppose
that you’re particularly interested in what’s happening around the middle.
You can mix pandas’ reindex and interpolate methods to interpolate
at the new values.
In [88]: ser = pd.Series(np.sort(np.random.uniform(size=100)))
# interpolate at new_index
In [89]: new_index = ser.index.union(pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75]))
In [90]: interp_s = ser.reindex(new_index).interpolate(method="pchip")
In [91]: interp_s[49:51]
Out[91]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64
Interpolation limits#
Like other pandas fill methods, interpolate() accepts a limit keyword
argument. Use this argument to limit the number of consecutive NaN values
filled since the last valid observation:
In [92]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
In [93]: ser
Out[93]:
0 NaN
1 NaN
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive values in a forward direction
In [94]: ser.interpolate()
Out[94]:
0 NaN
1 NaN
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
# fill one consecutive value in a forward direction
In [95]: ser.interpolate(limit=1)
Out[95]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 NaN
6 13.0
7 13.0
8 NaN
dtype: float64
By default, NaN values are filled in a forward direction. Use
limit_direction parameter to fill backward or from both directions.
# fill one consecutive value backwards
In [96]: ser.interpolate(limit=1, limit_direction="backward")
Out[96]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill one consecutive value in both directions
In [97]: ser.interpolate(limit=1, limit_direction="both")
Out[97]:
0 NaN
1 5.0
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 13.0
8 NaN
dtype: float64
# fill all consecutive values in both directions
In [98]: ser.interpolate(limit_direction="both")
Out[98]:
0 5.0
1 5.0
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
By default, NaN values are filled whether they are inside (surrounded by)
existing valid values, or outside existing valid values. The limit_area
parameter restricts filling to either inside or outside values.
# fill one consecutive inside value in both directions
In [99]: ser.interpolate(limit_direction="both", limit_area="inside", limit=1)
Out[99]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values backward
In [100]: ser.interpolate(limit_direction="backward", limit_area="outside")
Out[100]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values in both directions
In [101]: ser.interpolate(limit_direction="both", limit_area="outside")
Out[101]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 13.0
8 13.0
dtype: float64
Replacing generic values#
Often times we want to replace arbitrary values with other values.
replace() in Series and replace() in DataFrame provides an efficient yet
flexible way to perform such replacements.
For a Series, you can replace a single value or a list of values by another
value:
In [102]: ser = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
In [103]: ser.replace(0, 5)
Out[103]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
You can replace a list of values by a list of other values:
In [104]: ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
Out[104]:
0 4.0
1 3.0
2 2.0
3 1.0
4 0.0
dtype: float64
You can also specify a mapping dict:
In [105]: ser.replace({0: 10, 1: 100})
Out[105]:
0 10.0
1 100.0
2 2.0
3 3.0
4 4.0
dtype: float64
For a DataFrame, you can specify individual values by column:
In [106]: df = pd.DataFrame({"a": [0, 1, 2, 3, 4], "b": [5, 6, 7, 8, 9]})
In [107]: df.replace({"a": 0, "b": 5}, 100)
Out[107]:
a b
0 100 100
1 1 6
2 2 7
3 3 8
4 4 9
Instead of replacing with specified values, you can treat all given values as
missing and interpolate over them:
In [108]: ser.replace([1, 2, 3], method="pad")
Out[108]:
0 0.0
1 0.0
2 0.0
3 0.0
4 4.0
dtype: float64
String/regular expression replacement#
Note
Python strings prefixed with the r character such as r'hello world'
are so-called “raw” strings. They have different semantics regarding
backslashes than strings without this prefix. Backslashes in raw strings
will be kept as literal backslashes, e.g., r'\n' == '\\n'. You
should read about them
if this is unclear.
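A tiny sketch of the difference, using plain Python only:
# '\n' is one character (a newline); r'\n' is two characters, a backslash
# followed by the letter n, which is what the regex engine needs to see
# for patterns such as r'\s*\.\s*' used below.
assert len("\n") == 1
assert len(r"\n") == 2
assert r"\s*\.\s*" == "\\s*\\.\\s*"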
Replace the ‘.’ with NaN (str -> str):
In [109]: d = {"a": list(range(4)), "b": list("ab.."), "c": ["a", "b", np.nan, "d"]}
In [110]: df = pd.DataFrame(d)
In [111]: df.replace(".", np.nan)
Out[111]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Now do it with a regular expression that removes surrounding whitespace
(regex -> regex):
In [112]: df.replace(r"\s*\.\s*", np.nan, regex=True)
Out[112]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Replace a few different values (list -> list):
In [113]: df.replace(["a", "."], ["b", np.nan])
Out[113]:
a b c
0 0 b b
1 1 b b
2 2 NaN NaN
3 3 NaN d
list of regex -> list of regex:
In [114]: df.replace([r"\.", r"(a)"], ["dot", r"\1stuff"], regex=True)
Out[114]:
a b c
0 0 astuff astuff
1 1 b b
2 2 dot NaN
3 3 dot d
Only search in column 'b' (dict -> dict):
In [115]: df.replace({"b": "."}, {"b": np.nan})
Out[115]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Same as the previous example, but use a regular expression for
searching instead (dict of regex -> dict):
In [116]: df.replace({"b": r"\s*\.\s*"}, {"b": np.nan}, regex=True)
Out[116]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can pass nested dictionaries of regular expressions that use regex=True:
In [117]: df.replace({"b": {"b": r""}}, regex=True)
Out[117]:
a b c
0 0 a a
1 1 b
2 2 . NaN
3 3 . d
Alternatively, you can pass the nested dictionary like so:
In [118]: df.replace(regex={"b": {r"\s*\.\s*": np.nan}})
Out[118]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can also use the group of a regular expression match when replacing (dict
of regex -> dict of regex); this works for lists as well.
In [119]: df.replace({"b": r"\s*(\.)\s*"}, {"b": r"\1ty"}, regex=True)
Out[119]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d
You can pass a list of regular expressions, of which those that match
will be replaced with a scalar (list of regex -> regex).
In [120]: df.replace([r"\s*\.\s*", r"a|b"], np.nan, regex=True)
Out[120]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
All of the regular expression examples can also be passed with the
to_replace argument as the regex argument. In this case the value
argument must be passed explicitly by name or regex must be a nested
dictionary. The previous example, in this case, would then be:
In [121]: df.replace(regex=[r"\s*\.\s*", r"a|b"], value=np.nan)
Out[121]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
This can be convenient if you do not want to pass regex=True every time you
want to use a regular expression.
Note
Anywhere in the above replace examples that you see a regular expression
a compiled regular expression is valid as well.
Numeric replacement#
replace() is similar to fillna().
In [122]: df = pd.DataFrame(np.random.randn(10, 2))
In [123]: df[np.random.rand(df.shape[0]) > 0.5] = 1.5
In [124]: df.replace(1.5, np.nan)
Out[124]:
0 1
0 -0.844214 -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
Replacing more than one value is possible by passing a list.
In [125]: df00 = df.iloc[0, 0]
In [126]: df.replace([1.5, df00], [np.nan, "a"])
Out[126]:
0 1
0 a -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
In [127]: df[1].dtype
Out[127]: dtype('float64')
You can also operate on the DataFrame in place:
In [128]: df.replace(1.5, np.nan, inplace=True)
Missing data casting rules and indexing#
While pandas supports storing arrays of integer and boolean type, these types
are not capable of storing missing data. Until we can switch to using a native
NA type in NumPy, we’ve established some “casting rules”. When a reindexing
operation introduces missing data, the Series will be cast according to the
rules introduced in the table below.
data type    Cast to
integer      float
boolean      object
float        no cast
object       no cast
For example:
In [129]: s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])
In [130]: s > 0
Out[130]:
0 True
2 True
4 True
6 True
7 True
dtype: bool
In [131]: (s > 0).dtype
Out[131]: dtype('bool')
In [132]: crit = (s > 0).reindex(list(range(8)))
In [133]: crit
Out[133]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
7 True
dtype: object
In [134]: crit.dtype
Out[134]: dtype('O')
Ordinarily NumPy will complain if you try to use an object array (even if it
contains boolean values) instead of a boolean array to get or set values from
an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:
In [135]: reindexed = s.reindex(list(range(8))).fillna(0)
In [136]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[136], line 1
----> 1 reindexed[crit]
File ~/work/pandas/pandas/pandas/core/series.py:1002, in Series.__getitem__(self, key)
999 if is_iterator(key):
1000 key = list(key)
-> 1002 if com.is_bool_indexer(key):
1003 key = check_bool_indexer(self.index, key)
1004 key = np.asarray(key, dtype=bool)
File ~/work/pandas/pandas/pandas/core/common.py:135, in is_bool_indexer(key)
131 na_msg = "Cannot mask with non-boolean array containing NA / NaN values"
132 if lib.infer_dtype(key_array) == "boolean" and isna(key_array).any():
133 # Don't raise on e.g. ["A", "B", np.nan], see
134 # test_loc_getitem_list_of_labels_categoricalindex_with_na
--> 135 raise ValueError(na_msg)
136 return False
137 return True
ValueError: Cannot mask with non-boolean array containing NA / NaN values
However, these can be filled in using fillna() and it will work fine:
In [137]: reindexed[crit.fillna(False)]
Out[137]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64
In [138]: reindexed[crit.fillna(True)]
Out[138]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
7 0.003659
dtype: float64
pandas provides a nullable integer dtype, but you must explicitly request it
when creating the series or column. Notice that we use a capital “I” in
the dtype="Int64".
In [139]: s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")
In [140]: s
Out[140]:
0 0
1 1
2 <NA>
3 3
4 4
dtype: Int64
See Nullable integer data type for more.
Experimental NA scalar to denote missing values#
Warning
Experimental: the behaviour of pd.NA can still change without warning.
New in version 1.0.0.
Starting from pandas 1.0, an experimental pd.NA value (singleton) is
available to represent scalar missing values. At this moment, it is used in
the nullable integer, boolean and
dedicated string data types as the missing value indicator.
The goal of pd.NA is to provide a “missing” indicator that can be used
consistently across data types (instead of np.nan, None or pd.NaT
depending on the data type).
For example, when having missing values in a Series with the nullable integer
dtype, it will use pd.NA:
In [141]: s = pd.Series([1, 2, None], dtype="Int64")
In [142]: s
Out[142]:
0 1
1 2
2 <NA>
dtype: Int64
In [143]: s[2]
Out[143]: <NA>
In [144]: s[2] is pd.NA
Out[144]: True
Currently, pandas does not yet use those data types by default (when creating
a DataFrame or Series, or when reading in data), so you need to specify
the dtype explicitly. An easy way to convert to those dtypes is explained
here.
Propagation in arithmetic and comparison operations#
In general, missing values propagate in operations involving pd.NA. When
one of the operands is unknown, the outcome of the operation is also unknown.
For example, pd.NA propagates in arithmetic operations, similarly to
np.nan:
In [145]: pd.NA + 1
Out[145]: <NA>
In [146]: "a" * pd.NA
Out[146]: <NA>
There are a few special cases when the result is known, even when one of the
operands is NA.
In [147]: pd.NA ** 0
Out[147]: 1
In [148]: 1 ** pd.NA
Out[148]: 1
In equality and comparison operations, pd.NA also propagates. This deviates
from the behaviour of np.nan, where comparisons with np.nan always
return False.
In [149]: pd.NA == 1
Out[149]: <NA>
In [150]: pd.NA == pd.NA
Out[150]: <NA>
In [151]: pd.NA < 2.5
Out[151]: <NA>
To check if a value is equal to pd.NA, the isna() function can be
used:
In [152]: pd.isna(pd.NA)
Out[152]: True
An exception on this basic propagation rule are reductions (such as the
mean or the minimum), where pandas defaults to skipping missing values. See
above for more.
Logical operations#
For logical operations, pd.NA follows the rules of the
three-valued logic (or
Kleene logic, similarly to R, SQL and Julia). This logic means to only
propagate missing values when it is logically required.
For example, for the logical “or” operation (|), if one of the operands
is True, we already know the result will be True, regardless of the
other value (so regardless of whether the missing value would be True or False).
In this case, pd.NA does not propagate:
In [153]: True | False
Out[153]: True
In [154]: True | pd.NA
Out[154]: True
In [155]: pd.NA | True
Out[155]: True
On the other hand, if one of the operands is False, the result depends
on the value of the other operand. Therefore, in this case pd.NA
propagates:
In [156]: False | True
Out[156]: True
In [157]: False | False
Out[157]: False
In [158]: False | pd.NA
Out[158]: <NA>
The behaviour of the logical “and” operation (&) can be derived using
similar logic (where now pd.NA will not propagate if one of the operands
is already False):
In [159]: False & True
Out[159]: False
In [160]: False & False
Out[160]: False
In [161]: False & pd.NA
Out[161]: False
In [162]: True & True
Out[162]: True
In [163]: True & False
Out[163]: False
In [164]: True & pd.NA
Out[164]: <NA>
NA in a boolean context#
Since the actual value of an NA is unknown, it is ambiguous to convert NA
to a boolean value. The following raises an error:
In [165]: bool(pd.NA)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[165], line 1
----> 1 bool(pd.NA)
File ~/work/pandas/pandas/pandas/_libs/missing.pyx:382, in pandas._libs.missing.NAType.__bool__()
TypeError: boolean value of NA is ambiguous
This also means that pd.NA cannot be used in a context where it is
evaluated to a boolean, such as if condition: ... where condition can
potentially be pd.NA. In such cases, isna() can be used to check
for pd.NA, or the condition being pd.NA can be avoided, for example by
filling missing values beforehand.
A similar situation occurs when using Series or DataFrame objects in if
statements, see Using if/truth statements with pandas.
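A minimal sketch of such a guard (added for illustration, assuming value is a scalar that could be pd.NA):
import pandas as pd

value = pd.NA
if pd.isna(value):           # bool(value) would raise TypeError, so test explicitly
    print("value is missing")
else:
    print("value is", value)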
NumPy ufuncs#
pandas.NA implements NumPy’s __array_ufunc__ protocol. Most ufuncs
work with NA, and generally return NA:
In [166]: np.log(pd.NA)
Out[166]: <NA>
In [167]: np.add(pd.NA, 1)
Out[167]: <NA>
Warning
Currently, ufuncs involving an ndarray and NA will return an
object-dtype filled with NA values.
In [168]: a = np.array([1, 2, 3])
In [169]: np.greater(a, pd.NA)
Out[169]: array([<NA>, <NA>, <NA>], dtype=object)
The return type here may change to return a different array type
in the future.
See DataFrame interoperability with NumPy functions for more on ufuncs.
Conversion#
If you have a DataFrame or Series using traditional types that have missing data
represented using np.nan, there are convenience methods
convert_dtypes() in Series and convert_dtypes()
in DataFrame that can convert data to use the newer dtypes for integers, strings and
booleans listed here. This is especially helpful after reading
in data sets when letting the readers such as read_csv() and read_excel()
infer default dtypes.
In this example, while the dtypes of all columns are changed, we show the results for
the first 10 columns.
In [170]: bb = pd.read_csv("data/baseball.csv", index_col="id")
In [171]: bb[bb.columns[:10]].dtypes
Out[171]:
player object
year int64
stint int64
team object
lg object
g int64
ab int64
r int64
h int64
X2b int64
dtype: object
In [172]: bbn = bb.convert_dtypes()
In [173]: bbn[bbn.columns[:10]].dtypes
Out[173]:
player string
year Int64
stint Int64
team string
lg string
g Int64
ab Int64
r Int64
h Int64
X2b Int64
dtype: object
| 327
| 1,166
|
How to fill the NaN values with the values contains in another column?
I want to find out whether the split column contains anything from the class list. If it does, I want to update the Category column using the matching value from the class list. The Desired Category column shows the output I am aiming for.
Sample data
domain split Category Desired Category
[email protected] XYT.com Null XYT
[email protected] XTY.com Null Null
[email protected] ssa.com Null ssa
[email protected] bbc.com Null bbc
[email protected] abk.com Null abk
[email protected] ssb.com Null ssb
Class=['NaN','XYT','ssa','abk','abc','def','asds','ssb','bbc','XY','ab']
for index, row in df.iterrows():
for x in class:
intersection=row.split.contains(x)
if intersection:
df.loc[index,'class'] = intersection
Just cannot get it right
Please help,
Thanks
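One possible approach (a hedged sketch with hypothetical toy data, not the accepted answer) is to build a regex alternation from the class list and use str.extract, keeping longer names such as XYT ahead of their prefixes such as XY:
import pandas as pd

Class = ['XYT', 'ssa', 'abk', 'abc', 'def', 'asds', 'ssb', 'bbc', 'XY', 'ab']   # 'NaN' entry dropped
df = pd.DataFrame({'split': ['XYT.com', 'XTY.com', 'ssa.com', 'bbc.com']})      # toy data only
pattern = '(' + '|'.join(Class) + ')'
df['Category'] = df['split'].str.extract(pattern, expand=False)                 # no match -> NaN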
|
67,337,881
|
Swap columns in dataframe in Pandas doesn't work
|
<pre><code> def columnchange (df_old, a=None, b=None):
x=df_old[a]
df_old[a]=df_old[b]
df_old[b]=x
return df_old
</code></pre>
<p>I am wondering why this column swap is not working: it makes both columns of <code>df_old</code> equal. An explanation for this would be helpful. I am able to swap the columns using the column index, but I don't know why this version fails.</p>
| 67,338,004
| 2021-04-30T17:09:25.077000
| 1
| null | 2
| 65
|
python|pandas
|
<p><strong>Alternative I</strong> - as per <a href="https://pandas.pydata.org/docs/user_guide/indexing.html" rel="nofollow noreferrer">documentation</a></p>
<pre><code>df.loc[:, ['b', 'a']] = df[['a', 'b']].to_numpy()
</code></pre>
<p><strong>Alternative II</strong></p>
<p>The reason is that you don't create copies, you create references.</p>
<p>try this:</p>
<pre><code> def columnchange (df_old, a=None, b=None):
x=df_old[a].copy()
df_old[a]=df_old[b].copy()
df_old[b]=x
return df_old
</code></pre>
<p>Further explanation:</p>
<pre><code>x = df_old[a]
</code></pre>
<p>means you could do <code>x = x + 1</code> and the result would show up in <code>df_old[a]</code> as well, because you created a reference, or let's say a "synonym", of the series, not a copy.</p>
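<p>For completeness, a small usage sketch of the fixed function with made-up data (my addition; expected output shown as comments):</p>
<pre><code>import pandas as pd

def columnchange(df_old, a=None, b=None):
    x = df_old[a].copy()      # independent copy, unaffected by later assignments
    df_old[a] = df_old[b]
    df_old[b] = x
    return df_old

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(columnchange(df, 'a', 'b'))
#    a  b
# 0  3  1
# 1  4  2
</code></pre>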
| 2021-04-30T17:18:04.030000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html
|
pandas.DataFrame.assign#
pandas.DataFrame.assign#
DataFrame.assign(**kwargs)[source]#
Assign new columns to a DataFrame.
Returns a new object with all original columns in addition to new ones.
Existing columns that are re-assigned will be overwritten.
Parameters
**kwargsdict of {str: callable or Series}The column names are keywords. If the values are
Alternative I - as per documentation
df.loc[:, ['b', 'a']] = df[['a', 'b']].to_numpy()
Alternative II
The reason is that you don't create copies, you create references.
try this:
def columnchange (df_old, a=None, b=None):
x=df_old[a].copy()
df_old[a]=df_old[b].copy()
df_old[b]=x
return df_old
Further explanation:
x = df_old[a]
means you could do x = x + 1 and the result would show up in df_old[a] as well, because you created a reference, or let's say a "synonym", of the series, not a copy.
callable, they are computed on the DataFrame and
assigned to the new columns. The callable must not
change input DataFrame (though pandas doesn’t check it).
If the values are not callable, (e.g. a Series, scalar, or array),
they are simply assigned.
Returns
DataFrameA new DataFrame with the new columns in addition to
all the existing columns.
Notes
Assigning multiple columns within the same assign is possible.
Later items in ‘**kwargs’ may refer to newly created or modified
columns in ‘df’; items are computed and assigned into ‘df’ in order.
Examples
>>> df = pd.DataFrame({'temp_c': [17.0, 25.0]},
... index=['Portland', 'Berkeley'])
>>> df
temp_c
Portland 17.0
Berkeley 25.0
Where the value is a callable, evaluated on df:
>>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)
temp_c temp_f
Portland 17.0 62.6
Berkeley 25.0 77.0
Alternatively, the same behavior can be achieved by directly
referencing an existing Series or sequence:
>>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32)
temp_c temp_f
Portland 17.0 62.6
Berkeley 25.0 77.0
You can create multiple columns within the same assign where one
of the columns depends on another one defined within the same assign:
>>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32,
... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9)
temp_c temp_f temp_k
Portland 17.0 62.6 290.15
Berkeley 25.0 77.0 298.15
| 359
| 889
|
Swap columns in dataframe in Pandas doesn't work
def columnchange (df_old, a=None, b=None):
x=df_old[a]
df_old[a]=df_old[b]
df_old[b]=x
return df_old
I am wondering why this column swap is not working: it makes both columns of df_old equal. An explanation for this would be helpful. I am able to swap the columns using the column index, but I don't know why this version fails.
|
66,283,370
|
Pandas create new dataframe by querying other dataframes without using iterrows
|
<p>I have two huge dataframes that both have the same id field. I want to make a simple summary dataframe where I show the maximum of specific columns. I understand <code>iterrows()</code> is frowned upon, so are there a couple of one-liners to do this? I don't understand lambda/apply very well, but maybe this would work here.</p>
<p><strong>Stand-alone example</strong></p>
<pre><code>import pandas as pd
myid = [1,1,2,3,4,4,5]
name =['A','A','B','C','D','D','E']
x = [15,12,3,3,1,4,8]
df1 = pd.DataFrame(list(zip(myid, name, x)),
columns=['myid', 'name', 'x'])
display(df1)
myid = [1,2,2,2,3,4,5,5]
name =['A','B','B','B','C','D','E','E']
y = [9,6,3,4,6,2,8,2]
df2 = pd.DataFrame(list(zip(myid, name, y)),
columns=['myid', 'name', 'y'])
display(df2)
mylist = df['myid'].unique()
df_summary = pd.DataFrame(mylist, columns=['MY_ID'])
## do work here...
</code></pre>
<p><a href="https://i.stack.imgur.com/HW7y0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HW7y0.png" alt="enter image description here" /></a></p>
<p><strong>Desired output</strong><br/></p>
<p><a href="https://i.stack.imgur.com/py1ap.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/py1ap.png" alt="enter image description here" /></a></p>
| 66,283,527
| 2021-02-19T18:47:56.443000
| 2
| null | 1
| 70
|
python|pandas
|
<ul>
<li><code>merge()</code></li>
<li>named aggregations</li>
</ul>
<pre><code>df1.merge(df2, on=["myid","name"], how="outer")\
.groupby(["myid","name"], as_index=False).agg(MAX_X=("x","max"),MAX_Y=("y","max"))
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">myid</th>
<th style="text-align: left;">name</th>
<th style="text-align: right;">MAX_X</th>
<th style="text-align: right;">MAX_Y</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">A</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">9</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">B</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">C</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">D</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">5</td>
<td style="text-align: left;">E</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">8</td>
</tr>
</tbody>
</table>
</div><h3>updated</h3>
<ul>
<li>you have noted that your data frames are large and the solution above is giving you OOM errors</li>
<li>logically, aggregating first and then merging will use less memory</li>
</ul>
<pre><code>pd.merge(
df1.groupby(["myid","name"],as_index=False).agg(MAX_X=("x","max")),
df2.groupby(["myid","name"],as_index=False).agg(MAX_Y=("y","max")),
on=["myid","name"]
)
</code></pre>
| 2021-02-19T18:59:39.687000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.compare.html
|
pandas.DataFrame.compare#
pandas.DataFrame.compare#
DataFrame.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another DataFrame and show the differences.
New in version 1.1.0.
Parameters
otherDataFrameObject to compare with.
merge()
named aggregations
df1.merge(df2, on=["myid","name"], how="outer")\
.groupby(["myid","name"], as_index=False).agg(MAX_X=("x","max"),MAX_Y=("y","max"))
   myid name  MAX_X  MAX_Y
0     1    A     15      9
1     2    B      3      6
2     3    C      3      6
3     4    D      4      2
4     5    E      8      8
updated
you have noted that your data frames are large and the solution above is giving you OOM errors
logically, aggregating first and then merging will use less memory
pd.merge(
df1.groupby(["myid","name"],as_index=False).agg(MAX_X=("x","max")),
df2.groupby(["myid","name"],as_index=False).agg(MAX_Y=("y","max")),
on=["myid","name"]
)
align_axis{0 or ‘index’, 1 or ‘columns’}, default 1Determine which axis to align the comparison on.
0, or ‘index’Resulting differences are stacked verticallywith rows drawn alternately from self and other.
1, or ‘columns’Resulting differences are aligned horizontallywith columns drawn alternately from self and other.
keep_shapebool, default FalseIf true, all rows and columns are kept.
Otherwise, only the ones with different values are kept.
keep_equalbool, default FalseIf true, the result keeps values that are equal.
Otherwise, equal values are shown as NaNs.
result_namestuple, default (‘self’, ‘other’)Set the dataframes names in the comparison.
New in version 1.5.0.
Returns
DataFrameDataFrame that shows the differences stacked side by side.
The resulting index will be a MultiIndex with ‘self’ and ‘other’
stacked alternately at the inner level.
Raises
ValueErrorWhen the two DataFrames don’t have identical labels or shape.
See also
Series.compareCompare with another Series and show differences.
DataFrame.equalsTest whether two objects contain the same elements.
Notes
Matching NaNs will not appear as a difference.
Can only compare identically-labeled
(i.e. same shape, identical row and column labels) DataFrames
Examples
>>> df = pd.DataFrame(
... {
... "col1": ["a", "a", "b", "b", "a"],
... "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
... "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
... },
... columns=["col1", "col2", "col3"],
... )
>>> df
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
>>> df2 = df.copy()
>>> df2.loc[0, 'col1'] = 'c'
>>> df2.loc[2, 'col3'] = 4.0
>>> df2
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
Align the differences on columns
>>> df.compare(df2)
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Assign result_names
>>> df.compare(df2, result_names=("left", "right"))
col1 col3
left right left right
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Stack the differences on rows
>>> df.compare(df2, align_axis=0)
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
Keep the equal values
>>> df.compare(df2, keep_equal=True)
col1 col3
self other self other
0 a c 1.0 1.0
2 b b 3.0 4.0
Keep all original rows and columns
>>> df.compare(df2, keep_shape=True)
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
Keep all original rows and columns and also all original values
>>> df.compare(df2, keep_shape=True, keep_equal=True)
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 302
| 885
|
Pandas create new dataframe by querying other dataframes without using iterrows
I have two huge dataframes that both have the same id field. I want to make a simple summary dataframe where I show the maximum of specific columns. I understand iterrows() is frowned upon, so are there a couple of one-liners to do this? I don't understand lambda/apply very well, but maybe this would work here.
Stand-alone example
import pandas as pd
myid = [1,1,2,3,4,4,5]
name =['A','A','B','C','D','D','E']
x = [15,12,3,3,1,4,8]
df1 = pd.DataFrame(list(zip(myid, name, x)),
columns=['myid', 'name', 'x'])
display(df1)
myid = [1,2,2,2,3,4,5,5]
name =['A','B','B','B','C','D','E','E']
y = [9,6,3,4,6,2,8,2]
df2 = pd.DataFrame(list(zip(myid, name, y)),
columns=['myid', 'name', 'y'])
display(df2)
mylist = df['myid'].unique()
df_summary = pd.DataFrame(mylist, columns=['MY_ID'])
## do work here...
Desired output
|
64,309,419
|
Removing list of strings from column in pandas
|
<p>I would need to remove a list of strings:</p>
<pre><code>list_strings=['describe','include','any']
</code></pre>
<p>from a column in pandas:</p>
<pre><code>My_Column
include details about your goal
describe expected and actual results
show some code anywhere
</code></pre>
<p>I tried</p>
<pre><code>df['My_Column']=df['My_Column'].str.replace('|'.join(list_strings), '')
</code></pre>
<p>but it removes parts of words.</p>
<p>For example:</p>
<pre><code>My_Column
details about your goal
expected and actual results
show some code where # here it should be anywhere
</code></pre>
<p>My expected output:</p>
<pre><code>My_Column
details about your goal
expected and actual results
show some code anywhere
</code></pre>
| 64,309,637
| 2020-10-11T21:59:49.033000
| 3
| null | 1
| 587
|
python|pandas
|
<p>Use the "word boundary" expression <code>\b</code> like.</p>
<pre><code>In [46]: df.My_Column.str.replace(r'\b{}\b'.format('|'.join(list_strings)), '')
Out[46]:
0 details about your goal
1 expected and actual results
2 show some code anywhere
Name: My_Column, dtype: object
</code></pre>
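<p>One caveat (my addition, not part of the original answer): wrapping the alternation in a group makes both <code>\b</code> anchors apply to every word, and escaping the items guards against regex metacharacters:</p>
<pre><code>import re

pattern = r'\b(?:{})\b'.format('|'.join(map(re.escape, list_strings)))
df['My_Column'] = df['My_Column'].str.replace(pattern, '', regex=True)
</code></pre>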
| 2020-10-11T22:33:45.457000
| 2
|
https://pandas.pydata.org/docs/user_guide/text.html
|
Working with text data#
Working with text data#
Text data types#
New in version 1.0.0.
There are two ways to store text data in pandas:
object -dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate
for many reasons:
You can accidentally store a mixture of strings and non-strings in an
object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes().
There isn’t a clear way to select just text while excluding non-text
but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear
than 'string'.
Currently, the performance of object dtype arrays of strings and
Use the "word boundary" expression \b like.
In [46]: df.My_Column.str.replace(r'\b{}\b'.format('|'.join(list_strings)), '')
Out[46]:
0 details about your goal
1 expected and actual results
2 show some code anywhere
Name: My_Column, dtype: object
arrays.StringArray are about the same. We expect future enhancements
to significantly increase the performance and lower the memory overhead of
StringArray.
Warning
StringArray is currently considered experimental. The implementation
and parts of the API may change without warning.
For backwards-compatibility, object dtype remains the default type we
infer a list of strings to
In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0 a
1 b
2 c
dtype: object
To explicitly request string dtype, specify the dtype
In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0 a
1 b
2 c
dtype: string
In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0 a
1 b
2 c
dtype: string
Or astype after the Series or DataFrame is created
In [4]: s = pd.Series(["a", "b", "c"])
In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object
In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string
Changed in version 1.1.0.
You can also use StringDtype/"string" as the dtype on non-string data and
it will be converted to string dtype:
In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")
In [8]: s
Out[8]:
0 a
1 2
2 <NA>
dtype: string
In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:
In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")
In [11]: s1
Out[11]:
0 1
1 2
2 <NA>
dtype: Int64
In [12]: s2 = s1.astype("string")
In [13]: s2
Out[13]:
0 1
1 2
2 <NA>
dtype: string
In [14]: type(s2[0])
Out[14]: str
Behavior differences#
These are places where the behavior of StringDtype objects differ from
object dtype
For StringDtype, string accessor methods
that return numeric output will always return a nullable integer dtype,
rather than either int or float dtype, depending on the presence of NA values.
Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")
In [16]: s
Out[16]:
0 a
1 <NA>
2 b
dtype: string
In [17]: s.str.count("a")
Out[17]:
0 1
1 <NA>
2 0
dtype: Int64
In [18]: s.dropna().str.count("a")
Out[18]:
0 1
2 0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype
In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")
In [20]: s2.str.count("a")
Out[20]:
0 1.0
1 NaN
2 0.0
dtype: float64
In [21]: s2.dropna().str.count("a")
Out[21]:
0 1
2 0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for
methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0 False
1 <NA>
2 False
dtype: boolean
In [23]: s.str.match("a")
Out[23]:
0 True
1 <NA>
2 False
dtype: boolean
Some string methods, like Series.str.decode() are not available
on StringArray because StringArray only holds strings, not
bytes.
In comparison operations, arrays.StringArray and Series backed
by a StringArray will return an object with BooleanDtype,
rather than a bool dtype object. Missing values in a StringArray
will propagate in comparison operations, rather than always comparing
unequal like numpy.nan.
Everything else that follows in the rest of this document applies equally to
string and object dtype.
String methods#
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
accessed via the str attribute and generally have names matching
the equivalent (scalar) built-in string methods:
In [24]: s = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
....: )
....:
In [25]: s.str.lower()
Out[25]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
In [26]: s.str.upper()
Out[26]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string
In [27]: s.str.len()
Out[27]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])
In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or
transforming DataFrame columns. For instance, you may have columns with
leading or trailing whitespace:
In [32]: df = pd.DataFrame(
....: np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
....: )
....:
In [33]: df
Out[33]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Since df.columns is an Index object, we can use the .str accessor
In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')
In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed.
Here we are removing leading and trailing whitespaces, lower casing all names,
and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
In [37]: df
Out[37]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Note
If you have a Series where lots of elements are repeated
(i.e. the number of unique elements in the Series is a lot smaller than the length of the
Series), it can be faster to convert the original Series to one of type
category and then use .str.<method> or .dt.<property> on that.
The performance difference comes from the fact that, for Series of type category, the
string operations are done on the .categories and not on each element of the
Series.
Please note that a Series of type category with string .categories has
some limitations in comparison to Series of type string (e.g. you can’t add strings to
each other: s + " " + s won’t work if s is a Series of type category). Also,
.str methods which operate on elements of type list are not available on such a
Series.
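A rough sketch of the idea (added for illustration; the actual speed-up depends on how repetitive the data is):
import pandas as pd

s = pd.Series(["low", "high", "low", "medium"] * 250_000)
s.str.upper()                      # the string work runs element by element
s.astype("category").str.upper()   # the string work happens once per category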
Warning
Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Splitting and replacing strings#
Methods like split return a Series of lists:
In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")
In [39]: s2.str.split("_")
Out[39]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [40]: s2.str.split("_").str.get(1)
Out[40]:
0 b
1 d
2 <NA>
3 g
dtype: object
In [41]: s2.str.split("_").str[1]
Out[41]:
0 b
1 d
2 <NA>
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand.
In [42]: s2.str.split("_", expand=True)
Out[42]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h
When original Series has StringDtype, the output columns will all
be StringDtype as well.
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h
rsplit is similar to split except it works in the reverse direction,
i.e., from the end of the string to the beginning of the string:
In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h
replace optionally uses regular expressions:
In [45]: s3 = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
....: dtype="string",
....: )
....:
In [46]: s3
Out[46]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string
In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Warning
Some caution must be taken when dealing with regular expressions! The current behavior
is to treat single character patterns as literal strings, even when regex is set
to True. This behavior is deprecated and will be removed in a future version so
that the regex keyword is always respected.
Changed in version 1.2.0.
If you want literal replacement of a string (equivalent to str.replace()), you
can set the optional regex parameter to False, rather than escaping each
character. In this case both pat and repl must be strings:
In [48]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")
# These lines are equivalent
In [49]: dollars.str.replace(r"-\$", "-", regex=True)
Out[49]:
0 12
1 -10
2 $10,000
dtype: string
In [50]: dollars.str.replace("-$", "-", regex=False)
Out[50]:
0 12
1 -10
2 $10,000
dtype: string
The replace method can also take a callable as replacement. It is called
on every pat using re.sub(). The callable should expect one
positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [51]: pat = r"[a-z]+"
In [52]: def repl(m):
....: return m.group(0)[::-1]
....:
In [53]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[53]:
0 oof 123
1 rab zab
2 <NA>
dtype: string
# Using regex groups
In [54]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
In [55]: def repl(m):
....: return m.group("two").swapcase()
....:
In [56]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[56]:
0 bAR
1 <NA>
dtype: string
The replace method also accepts a compiled regular expression object
from re.compile() as a pattern. All flags should be included in the
compiled regular expression object.
In [57]: import re
In [58]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)
In [59]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[59]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled
regular expression object will raise a ValueError.
In [60]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9
(https://docs.python.org/3/library/stdtypes.html#str.removeprefix):
New in version 1.4.0.
In [61]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])
In [62]: s.str.removeprefix("str_")
Out[62]:
0 foo
1 bar
2 no_prefix
dtype: object
In [63]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])
In [64]: s.str.removesuffix("_str")
Out[64]:
0 foo
1 bar
2 no_suffix
dtype: object
Concatenation#
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(),
resp. Index.str.cat.
Concatenating a single Series into a string#
The content of a Series (or Index) can be concatenated:
In [65]: s = pd.Series(["a", "b", "c", "d"], dtype="string")
In [66]: s.str.cat(sep=",")
Out[66]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [67]: s.str.cat()
Out[67]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
In [68]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")
In [69]: t.str.cat(sep=",")
Out[69]: 'a,b,d'
In [70]: t.str.cat(sep=",", na_rep="-")
Out[70]: 'a,b,-,d'
Concatenating a Series and something list-like into a Series#
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).
In [71]: s.str.cat(["A", "B", "C", "D"])
Out[71]:
0 aA
1 bB
2 cC
3 dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [72]: s.str.cat(t)
Out[72]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string
In [73]: s.str.cat(t, na_rep="-")
Out[73]:
0 aa
1 bb
2 c-
3 dd
dtype: string
Concatenating a Series and something array-like into a Series#
The parameter others can also be two-dimensional. In this case, the number or rows must match the lengths of the calling Series (or Index).
In [74]: d = pd.concat([t, s], axis=1)
In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string
In [76]: d
Out[76]:
0 1
0 a a
1 b b
2 <NA> c
3 d d
In [77]: s.str.cat(d, na_rep="-")
Out[77]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and an indexed object into a Series, with alignment#
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.
In [78]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")
In [79]: s
Out[79]:
0 a
1 b
2 c
3 d
dtype: string
In [80]: u
Out[80]:
1 b
3 d
0 a
2 c
dtype: string
In [81]: s.str.cat(u)
Out[81]:
0 aa
1 bb
2 cc
3 dd
dtype: string
In [82]: s.str.cat(u, join="left")
Out[82]:
0 aa
1 bb
2 cc
3 dd
dtype: string
Warning
If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right').
In particular, alignment also means that the different lengths do not need to coincide anymore.
In [83]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")
In [84]: s
Out[84]:
0 a
1 b
2 c
3 d
dtype: string
In [85]: v
Out[85]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [86]: s.str.cat(v, join="left", na_rep="-")
Out[86]:
0 aa
1 bb
2 c-
3 dd
dtype: string
In [87]: s.str.cat(v, join="outer", na_rep="-")
Out[87]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string
The same alignment can be used when others is a DataFrame:
In [88]: f = d.loc[[3, 2, 1, 0], :]
In [89]: s
Out[89]:
0 a
1 b
2 c
3 d
dtype: string
In [90]: f
Out[90]:
0 1
3 d d
2 <NA> c
1 b b
0 a a
In [91]: s.str.cat(f, join="left", na_rep="-")
Out[91]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and many objects into a Series#
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray)
can be combined in a list-like container (including iterators, dict-views, etc.).
In [92]: s
Out[92]:
0 a
1 b
2 c
3 d
dtype: string
In [93]: u
Out[93]:
1 b
3 d
0 a
2 c
dtype: string
In [94]: s.str.cat([u, u.to_numpy()], join="left")
Out[94]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index),
but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):
In [95]: v
Out[95]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [96]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[96]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes,
the union of these indexes will be used as the basis for the final concatenation:
In [97]: u.loc[[3]]
Out[97]:
3 d
dtype: string
In [98]: v.loc[[-1, 0]]
Out[98]:
-1 z
0 a
dtype: string
In [99]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[99]:
3 dd-
-1 --z
0 a-a
dtype: string
Indexing with .str#
You can use [] notation to directly index by position locations. If you index past the end
of the string, the result will be a NaN.
In [100]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [101]: s.str[0]
Out[101]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string
In [102]: s.str[1]
Out[102]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string
Extracting substrings#
Extract first match in each subject (extract)#
Warning
Before version 0.23, argument expand of the extract method defaulted to
False. When expand=False, expand returns a Series, Index, or
DataFrame, depending on the subject and regular expression
pattern. When expand=True, it always returns a DataFrame,
which is more consistent and less confusing from the perspective of a user.
expand=True has been the default since version 0.23.0.
The extract method accepts a regular expression with at least one
capture group.
Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [103]: pd.Series(
.....: ["a1", "b2", "c3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])(\d)", expand=False)
.....:
Out[103]:
0 1
0 a 1
1 b 2
2 <NA> <NA>
Elements that do not match return a row filled with NaN. Thus, a
Series of messy strings can be “converted” into a like-indexed Series
or DataFrame of cleaned-up or more useful strings, without
necessitating get() to access tuples or re.match objects. The
dtype of the result is always object, even if no match is found and
the result only contains NaN.
Named groups like
In [104]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(
.....: r"(?P<letter>[ab])(?P<digit>\d)", expand=False
.....: )
.....:
Out[104]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>
and optional groups like
In [105]: pd.Series(
.....: ["a1", "b2", "3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])?(\d)", expand=False)
.....:
Out[105]:
0 1
0 a 1
1 b 2
2 <NA> 3
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
numbers will be used.
Extracting a regular expression with one group returns a DataFrame
with one column if expand=True.
In [106]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=True)
Out[106]:
0
0 1
1 2
2 <NA>
It returns a Series if expand=False.
In [107]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=False)
Out[107]:
0 1
1 2
2 <NA>
dtype: string
Calling on an Index with a regex with exactly one capture group
returns a DataFrame with one column if expand=True.
In [108]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"], dtype="string")
In [109]: s
Out[109]:
A11 a1
B22 b2
C33 c3
dtype: string
In [110]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
Out[110]:
letter
0 A
1 B
2 C
It returns an Index if expand=False.
In [111]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[111]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group
returns a DataFrame if expand=True.
In [112]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[112]:
letter 1
0 A 11
1 B 22
2 C 33
It raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
The table below summarizes the behavior of extract(expand=False)
(input subject in first column, number of groups in regex in
first row)
         1 group   >1 group
Index    Index     ValueError
Series   Series    DataFrame
Extract all matches in each subject (extractall)#
Unlike extract (which returns only the first match),
In [113]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"], dtype="string")
In [114]: s
Out[114]:
A a1a2
B b1
C c1
dtype: string
In [115]: two_groups = "(?P<letter>[a-z])(?P<digit>[0-9])"
In [116]: s.str.extract(two_groups, expand=True)
Out[116]:
letter digit
A a 1
B b 1
C c 1
the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its
rows. The last level of the MultiIndex is named match and
indicates the order in the subject.
In [117]: s.str.extractall(two_groups)
Out[117]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [118]: s = pd.Series(["a3", "b3", "c2"], dtype="string")
In [119]: s
Out[119]:
0 a3
1 b3
2 c2
dtype: string
then extractall(pat).xs(0, level='match') gives the same result as
extract(pat).
In [120]: extract_result = s.str.extract(two_groups, expand=True)
In [121]: extract_result
Out[121]:
letter digit
0 a 3
1 b 3
2 c 2
In [122]: extractall_result = s.str.extractall(two_groups)
In [123]: extractall_result
Out[123]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [124]: extractall_result.xs(0, level="match")
Out[124]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the
same result as a Series.str.extractall with a default index (starts from 0).
In [125]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[125]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [126]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Out[126]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
Testing for strings that match or contain a pattern#
You can check whether elements contain a pattern:
In [127]: pattern = r"[0-9][a-z]"
In [128]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.contains(pattern)
.....:
Out[128]:
0 False
1 False
2 True
3 True
4 True
5 True
dtype: boolean
Or whether elements match a pattern:
In [129]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.match(pattern)
.....:
Out[129]:
0 False
1 False
2 True
3 True
4 False
5 True
dtype: boolean
New in version 1.1.0.
In [130]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.fullmatch(pattern)
.....:
Out[130]:
0 False
1 False
2 True
3 True
4 False
5 False
dtype: boolean
Note
The distinction between match, fullmatch, and contains is strictness:
fullmatch tests whether the entire string matches the regular expression;
match tests whether there is a match of the regular expression that begins
at the first character of the string; and contains tests whether there is
a match of the regular expression at any position within the string.
The corresponding functions in the re package for these three match modes are
re.fullmatch,
re.match, and
re.search,
respectively.
Methods like match, fullmatch, contains, startswith, and
endswith take an extra na argument so missing values can be considered
True or False:
In [131]: s4 = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [132]: s4.str.contains("A", na=False)
Out[132]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
Creating indicator variables#
You can extract dummy variables from string columns.
For example if they are separated by a '|':
In [133]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")
In [134]: s.str.get_dummies(sep="|")
Out[134]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
String Index also supports get_dummies which returns a MultiIndex.
In [135]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])
In [136]: idx.str.get_dummies(sep="|")
Out[136]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])
See also get_dummies().
Method summary#
Method
Description
cat()
Concatenate strings
split()
Split strings on delimiter
rsplit()
Split strings on delimiter working from the end of the string
get()
Index into each element (retrieve i-th element)
join()
Join strings in each element of the Series with passed separator
get_dummies()
Split strings on the delimiter returning DataFrame of dummy variables
contains()
Return boolean array if each string contains pattern/regex
replace()
Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix()
Remove prefix from string, i.e. only remove if string starts with prefix.
removesuffix()
Remove suffix from string, i.e. only remove if string ends with suffix.
repeat()
Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad()
Add whitespace to left, right, or both sides of strings
center()
Equivalent to str.center
ljust()
Equivalent to str.ljust
rjust()
Equivalent to str.rjust
zfill()
Equivalent to str.zfill
wrap()
Split long strings into lines with length less than a given width
slice()
Slice each string in the Series
slice_replace()
Replace slice in each string with passed value
count()
Count occurrences of pattern
startswith()
Equivalent to str.startswith(pat) for each element
endswith()
Equivalent to str.endswith(pat) for each element
findall()
Compute list of all occurrences of pattern/regex for each string
match()
Call re.match on each element, returning matched groups as list
extract()
Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall()
Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len()
Compute string lengths
strip()
Equivalent to str.strip
rstrip()
Equivalent to str.rstrip
lstrip()
Equivalent to str.lstrip
partition()
Equivalent to str.partition
rpartition()
Equivalent to str.rpartition
lower()
Equivalent to str.lower
casefold()
Equivalent to str.casefold
upper()
Equivalent to str.upper
find()
Equivalent to str.find
rfind()
Equivalent to str.rfind
index()
Equivalent to str.index
rindex()
Equivalent to str.rindex
capitalize()
Equivalent to str.capitalize
swapcase()
Equivalent to str.swapcase
normalize()
Return Unicode normal form. Equivalent to unicodedata.normalize
translate()
Equivalent to str.translate
isalnum()
Equivalent to str.isalnum
isalpha()
Equivalent to str.isalpha
isdigit()
Equivalent to str.isdigit
isspace()
Equivalent to str.isspace
islower()
Equivalent to str.islower
isupper()
Equivalent to str.isupper
istitle()
Equivalent to str.istitle
isnumeric()
Equivalent to str.isnumeric
isdecimal()
Equivalent to str.isdecimal
| 804
| 1,071
|
Removing list of strings from column in pandas
I would need to remove a list of strings:
list_strings=['describe','include','any']
from a column in pandas:
My_Column
include details about your goal
describe expected and actual results
show some code anywhere
I tried
df['My_Column']=df['My_Column'].str.replace('|'.join(list_strings), '')
but it removes parts of words.
For example:
My_Column
details about your goal
expected and actual results
show some code where # here it should be anywhere
My expected output:
My_Column
details about your goal
expected and actual results
show some code anywhere
|
67,683,542
|
Append all columns from one row into another row
|
<p>I am trying to append every column from one row onto another row. I want to do this for every row, but some rows will not have any values. Take a look at my code; it will make this clearer:</p>
<p>Here is my data</p>
<pre><code>date day_of_week day_of_month day_of_year month_of_year
5/1/2017 0 1 121 5
5/2/2017 1 2 122 5
5/3/2017 2 3 123 5
5/4/2017 3 4 124 5
5/8/2017 0 8 128 5
5/9/2017 1 9 129 5
5/10/2017 2 10 130 5
5/11/2017 3 11 131 5
5/12/2017 4 12 132 5
5/15/2017 0 15 135 5
5/16/2017 1 16 136 5
5/17/2017 2 17 137 5
5/18/2017 3 18 138 5
5/19/2017 4 19 139 5
5/23/2017 1 23 143 5
5/24/2017 2 24 144 5
5/25/2017 3 25 145 5
5/26/2017 4 26 146 5
</code></pre>
<p>Here is my current code:</p>
<pre><code>s = df_md['date'].shift(-1)
df_md['next_calendarday'] = s.mask(s.dt.dayofweek.diff().lt(0))
df_md.set_index('date', inplace=True)
df_md.apply(lambda row: GetNextDayMarketData(row, df_md), axis=1)
def GetNextDayMarketData(row, dataframe):
if(row['next_calendarday'] is pd.NaT):
return
key = row['next_calendarday'].strftime("%Y-%m-%d")
nextrow = dataframe.loc[key]
for index, val in nextrow.iteritems():
if(index != "next_calendarday"):
dataframe.loc[row.name, index+'_nextday'] = val
</code></pre>
<p>This works, but it's so slow it might as well not work. Here is what the result should look like; you can see that the values from the next row have been added to the previous row. The kicker is that it's the next calendar date and not just the next row in the sequence. If a row does not have an entry for the next calendar date, it will simply be blank.</p>
<p><a href="https://i.stack.imgur.com/qrXPt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qrXPt.png" alt="enter image description here" /></a></p>
<p>Here is the expected result in csv</p>
<pre><code>date day_of_week day_of_month day_of_year month_of_year next_workingday day_of_week_nextday day_of_month_nextday day_of_year_nextday month_of_year_nextday
5/1/2017 0 1 121 5 5/2/2017 1 2 122 5
5/2/2017 1 2 122 5 5/3/2017 2 3 123 5
5/3/2017 2 3 123 5 5/4/2017 3 4 124 5
5/4/2017 3 4 124 5
5/8/2017 0 8 128 5 5/9/2017 1 9 129 5
5/9/2017 1 9 129 5 5/10/2017 2 10 130 5
5/10/2017 2 10 130 5 5/11/2017 3 11 131 5
5/11/2017 3 11 131 5 5/12/2017 4 12 132 5
5/12/2017 4 12 132 5
5/15/2017 0 15 135 5 5/16/2017 1 16 136 5
5/16/2017 1 16 136 5 5/17/2017 2 17 137 5
5/17/2017 2 17 137 5 5/18/2017 3 18 138 5
5/18/2017 3 18 138 5 5/19/2017 4 19 139 5
5/19/2017 4 19 139 5
5/23/2017 1 23 143 5 5/24/2017 2 24 144 5
5/24/2017 2 24 144 5 5/25/2017 3 25 145 5
5/25/2017 3 25 145 5 5/26/2017 4 26 146 5
5/26/2017 4 26 146 5
5/30/2017 1 30 150 5
</code></pre>
| 67,683,682
| 2021-05-25T07:31:33.240000
| 1
| null | 2
| 83
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> and then drop the helper column <code>next_calendarday_nextday</code>:</p>
<pre><code>df = df.set_index('date')
df = (df.join(df, on='next_calendarday', rsuffix='_nextday')
.drop('next_calendarday_nextday', axis=1))
</code></pre>
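<p>A tiny usage sketch with made-up data (my addition), just to show the shape of the result; rows whose <code>next_calendarday</code> has no matching index simply get NaN in the <code>*_nextday</code> columns:</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {'day_of_week': [0, 1, 2],
     'next_calendarday': ['2017-05-02', '2017-05-03', None]},
    index=['2017-05-01', '2017-05-02', '2017-05-03'],
)
out = (df.join(df, on='next_calendarday', rsuffix='_nextday')
         .drop('next_calendarday_nextday', axis=1))
</code></pre>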
| 2021-05-25T07:42:27.320000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html
|
pandas.DataFrame.append#
pandas.DataFrame.append#
DataFrame.append(other, ignore_index=False, verify_integrity=False, sort=False)[source]#
Append rows of other to the end of caller, returning a new object.
Deprecated since version 1.4.0: Use concat() instead. For further details see
Use DataFrame.join and then drop the helper column next_calendarday_nextday:
df = df.set_index('date')
df = (df.join(df, on='next_calendarday', rsuffix='_nextday')
.drop('next_calendarday_nextday', axis=1))
Deprecated DataFrame.append and Series.append
Columns in other that are not in the caller are added as new columns.
Parameters
otherDataFrame or Series/dict-like object, or list of theseThe data to append.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
verify_integritybool, default FalseIf True, raise ValueError on creating index with duplicates.
sortbool, default FalseSort columns if the columns of self and other are not aligned.
Changed in version 1.0.0: Changed to not sort by default.
Returns
DataFrameA new DataFrame consisting of the rows of caller and the rows of other.
See also
concatGeneral function to concatenate DataFrame or Series objects.
Notes
If a list of dict/series is passed and the keys are all contained in
the DataFrame’s index, the order of the columns in the resulting
DataFrame will be unchanged.
Iteratively appending rows to a DataFrame can be more computationally
intensive than a single concatenate. A better solution is to append
those rows to a list and then concatenate the list with the original
DataFrame all at once.
Examples
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'), index=['x', 'y'])
>>> df
A B
x 1 2
y 3 4
>>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'), index=['x', 'y'])
>>> df.append(df2)
A B
x 1 2
y 3 4
x 5 6
y 7 8
With ignore_index set to True:
>>> df.append(df2, ignore_index=True)
A B
0 1 2
1 3 4
2 5 6
3 7 8
The following, while not recommended methods for generating DataFrames,
show two ways to generate a DataFrame from multiple data sources.
Less efficient:
>>> df = pd.DataFrame(columns=['A'])
>>> for i in range(5):
... df = df.append({'A': i}, ignore_index=True)
>>> df
A
0 0
1 1
2 2
3 3
4 4
More efficient:
>>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)],
... ignore_index=True)
A
0 0
1 1
2 2
3 3
4 4
| 289
| 491
|
Append all columns from one row into another row
I am trying to append every column from one row onto another row. I want to do this for every row, but some rows will not have any values. Take a look at my code; it will make this clearer:
Here is my data
date day_of_week day_of_month day_of_year month_of_year
5/1/2017 0 1 121 5
5/2/2017 1 2 122 5
5/3/2017 2 3 123 5
5/4/2017 3 4 124 5
5/8/2017 0 8 128 5
5/9/2017 1 9 129 5
5/10/2017 2 10 130 5
5/11/2017 3 11 131 5
5/12/2017 4 12 132 5
5/15/2017 0 15 135 5
5/16/2017 1 16 136 5
5/17/2017 2 17 137 5
5/18/2017 3 18 138 5
5/19/2017 4 19 139 5
5/23/2017 1 23 143 5
5/24/2017 2 24 144 5
5/25/2017 3 25 145 5
5/26/2017 4 26 146 5
Here is my current code:
s = df_md['date'].shift(-1)
df_md['next_calendarday'] = s.mask(s.dt.dayofweek.diff().lt(0))
df_md.set_index('date', inplace=True)
df_md.apply(lambda row: GetNextDayMarketData(row, df_md), axis=1)
def GetNextDayMarketData(row, dataframe):
if(row['next_calendarday'] is pd.NaT):
return
key = row['next_calendarday'].strftime("%Y-%m-%d")
nextrow = dataframe.loc[key]
for index, val in nextrow.iteritems():
if(index != "next_calendarday"):
dataframe.loc[row.name, index+'_nextday'] = val
This works, but it's so slow it might as well not work. Here is what the result should look like; you can see that the values from the next row have been added to the previous row. The kicker is that it's the next calendar date and not just the next row in the sequence. If a row does not have an entry for the next calendar date, it will simply be blank.
Here is the expected result in csv
date day_of_week day_of_month day_of_year month_of_year next_workingday day_of_week_nextday day_of_month_nextday day_of_year_nextday month_of_year_nextday
5/1/2017 0 1 121 5 5/2/2017 1 2 122 5
5/2/2017 1 2 122 5 5/3/2017 2 3 123 5
5/3/2017 2 3 123 5 5/4/2017 3 4 124 5
5/4/2017 3 4 124 5
5/8/2017 0 8 128 5 5/9/2017 1 9 129 5
5/9/2017 1 9 129 5 5/10/2017 2 10 130 5
5/10/2017 2 10 130 5 5/11/2017 3 11 131 5
5/11/2017 3 11 131 5 5/12/2017 4 12 132 5
5/12/2017 4 12 132 5
5/15/2017 0 15 135 5 5/16/2017 1 16 136 5
5/16/2017 1 16 136 5 5/17/2017 2 17 137 5
5/17/2017 2 17 137 5 5/18/2017 3 18 138 5
5/18/2017 3 18 138 5 5/19/2017 4 19 139 5
5/19/2017 4 19 139 5
5/23/2017 1 23 143 5 5/24/2017 2 24 144 5
5/24/2017 2 24 144 5 5/25/2017 3 25 145 5
5/25/2017 3 25 145 5 5/26/2017 4 26 146 5
5/26/2017 4 26 146 5
5/30/2017 1 30 150 5
|
68,301,817
|
Better Pandas way to count frequency of values in different columns together
|
<p>I have a <code>pandas.DataFrame</code> with ZIPCODES in two columns. I just want to count the appearance of all ZIPCODES with <code>value_counts()</code>. It does not matter to me which column they are in; I need a single result for all ZIPCODE columns in the DataFrame together.</p>
<p>This is the inital data with ZIPCODES in the columns:</p>
<pre><code> ZIPCODE_A ZIPCODE_B
0 10000 40000
1 20000 30000
2 20000 20000
3 10000 50000
4 30000 10000
</code></pre>
<p>The final and expected result would be:</p>
<pre><code> ZIPCODE_N
10000 3
20000 3
30000 2
40000 1
50000 1
</code></pre>
<h1>Question</h1>
<p>My solution works but looks complicated. Is there another more elegant <code>pandas</code>-way to solve this?</p>
<h1>MWE</h1>
<pre><code>#!/usr/bin/env python3
import pandas as pd
df = pd.DataFrame({'ZIPCODE_A': [10000, 20000, 20000, 10000, 30000],
'ZIPCODE_B': [40000, 30000, 20000, 50000, 10000]})
print(df)
a = df.ZIPCODE_A.value_counts()
b = df.ZIPCODE_B.value_counts()
a = pd.DataFrame(a)
b = pd.DataFrame(b)
r = a.join(b, how='outer')
r.loc[r.ZIPCODE_A.isna(), 'ZIPCODE_A'] = 0
r.loc[r.ZIPCODE_B.isna(), 'ZIPCODE_B'] = 0
r['ZIPCODE_N'] = r.ZIPCODE_A + r.ZIPCODE_B
r.ZIPCODE_N = r.ZIPCODE_N.astype(int)
del r['ZIPCODE_A']
del r['ZIPCODE_B']
print(r)
</code></pre>
| 68,301,887
| 2021-07-08T12:40:03.337000
| 4
| null | 2
| 84
|
python|pandas
|
<p>First <code>stack</code> the dataframe to have values of all the columns in a single column then call <code>value_counts()</code>, if needed, call <code>to_frame()</code> and pass the new column name i.e. <code>ZIPCODE_N</code></p>
<pre class="lang-py prettyprint-override"><code>>>> df.stack().value_counts().to_frame("ZIPCODE_N")
ZIPCODE_N
10000 3
20000 3
30000 2
50000 1
40000 1
</code></pre>
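<p>An equivalent alternative (my addition, shown only as another option):</p>
<pre class="lang-py prettyprint-override"><code>df.melt()['value'].value_counts().to_frame('ZIPCODE_N')
</code></pre>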
| 2021-07-08T12:44:04.903000
| 2
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/09_timeseries.html
|
How to handle time series data with ease?#
In [1]: import pandas as pd
In [2]: import matplotlib.pyplot as plt
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) and Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [3]: air_quality = pd.read_csv("data/air_quality_no2_long.csv")
In [4]: air_quality = air_quality.rename(columns={"date.utc": "datetime"})
First stack the dataframe to have values of all the columns in a single column then call value_counts(), if needed, call to_frame() and pass the new column name i.e. ZIPCODE_N
>>> df.stack().value_counts().to_frame("ZIPCODE_N")
ZIPCODE_N
10000 3
20000 3
30000 2
50000 1
40000 1
In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m³
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m³
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m³
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m³
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m³
In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)
How to handle time series data with ease?#
Using pandas datetime properties#
I want to work with the dates in the column datetime as datetime objects instead of plain text
In [7]: air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])
In [8]: air_quality["datetime"]
Out[8]:
0 2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extract the year, day of the
week,…). By applying the to_datetime function, pandas interprets the
strings and convert these to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects similar to
datetime.datetime from the standard library as pandas.Timestamp.
Note
As many data sets do contain datetime information in one of
the columns, pandas input functions like pandas.read_csv() and pandas.read_json()
can do the transformation to dates when reading the data using the
parse_dates parameter with a list of the columns to read as
Timestamp:
pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])
Why are these pandas.Timestamp objects useful? Let’s illustrate the added
value with some example cases.
What is the start and end date of the time series data set we are working
with?
In [9]: air_quality["datetime"].min(), air_quality["datetime"].max()
Out[9]:
(Timestamp('2019-05-07 01:00:00+0000', tz='UTC'),
Timestamp('2019-06-21 00:00:00+0000', tz='UTC'))
Using pandas.Timestamp for datetimes enables us to calculate with date
information and make them comparable. Hence, we can use this to get the
length of our time series:
In [10]: air_quality["datetime"].max() - air_quality["datetime"].min()
Out[10]: Timedelta('44 days 23:00:00')
The result is a pandas.Timedelta object, similar to datetime.timedelta
from the standard Python library and defining a time duration.
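As a quick illustration (a small sketch added here, not part of the original tutorial), a
Timedelta exposes handy attributes and conversions:
import pandas as pd
delta = pd.Timestamp("2019-06-21") - pd.Timestamp("2019-05-07")
print(delta)                  # 45 days 00:00:00
print(delta.days)             # 45
print(delta.total_seconds())  # 3888000.0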
To user guideThe various time concepts supported by pandas are explained in the user guide section on time related concepts.
I want to add a new column to the DataFrame containing only the month of the measurement
In [11]: air_quality["month"] = air_quality["datetime"].dt.month
In [12]: air_quality.head()
Out[12]:
city country datetime ... value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 ... 20.0 µg/m³ 6
1 Paris FR 2019-06-20 23:00:00+00:00 ... 21.8 µg/m³ 6
2 Paris FR 2019-06-20 22:00:00+00:00 ... 26.5 µg/m³ 6
3 Paris FR 2019-06-20 21:00:00+00:00 ... 24.9 µg/m³ 6
4 Paris FR 2019-06-20 20:00:00+00:00 ... 21.4 µg/m³ 6
[5 rows x 8 columns]
By using Timestamp objects for dates, a lot of time-related
properties are provided by pandas. For example the month, but also
year, weekofyear, quarter,… All of these properties are
accessible by the dt accessor.
To user guideAn overview of the existing date properties is given in the
time and date components overview table. More details about the dt accessor
to return datetime like properties are explained in a dedicated section on the dt accessor.
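For example (an illustrative sketch with a tiny stand-in Series, assumed here rather than taken
from the tutorial), several of these properties can be read off the same dt accessor:
import pandas as pd
s = pd.Series(pd.to_datetime(["2019-05-07 01:00", "2019-06-20 23:00"]))
print(s.dt.year)     # 2019 for both entries
print(s.dt.weekday)  # Monday=0 ... Sunday=6
print(s.dt.quarter)  # both dates fall in quarter 2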
What is the average \(NO_2\) concentration for each day of the week for each of the measurement locations?
In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64
Remember the split-apply-combine pattern provided by groupby from the
tutorial on statistics calculation?
Here, we want to calculate a given statistic (e.g. mean \(NO_2\))
for each weekday and for each measurement location. To group on
weekdays, we use the datetime property weekday (with Monday=0 and
Sunday=6) of pandas Timestamp, which is also accessible by the
dt accessor. The grouping on both locations and weekdays can be done
to split the calculation of the mean on each of these combinations.
Danger
As we are working with a very short time series in these
examples, the analysis does not provide a long-term representative
result!
Plot the typical \(NO_2\) pattern during the day of our time series of all stations together. In other words, what is the average value for each hour of the day?
In [14]: fig, axs = plt.subplots(figsize=(12, 4))
In [15]: air_quality.groupby(air_quality["datetime"].dt.hour)["value"].mean().plot(
....: kind='bar', rot=0, ax=axs
....: )
....:
Out[15]: <AxesSubplot: xlabel='datetime'>
In [16]: plt.xlabel("Hour of the day"); # custom x label using Matplotlib
In [17]: plt.ylabel("$NO_2 (µg/m^3)$");
Similar to the previous case, we want to calculate a given statistic
(e.g. mean \(NO_2\)) for each hour of the day and we can use the
split-apply-combine approach again. For this case, we use the datetime property hour
of pandas Timestamp, which is also accessible by the dt accessor.
Datetime as index#
In the tutorial on reshaping,
pivot() was introduced to reshape the data table with each of the
measurements locations as a separate column:
In [18]: no_2 = air_quality.pivot(index="datetime", columns="location", values="value")
In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
2019-05-07 02:00:00+00:00 45.0 27.7 19.0
2019-05-07 03:00:00+00:00 NaN 50.4 19.0
2019-05-07 04:00:00+00:00 NaN 61.9 16.0
2019-05-07 05:00:00+00:00 NaN 72.4 NaN
Note
By pivoting the data, the datetime information became the
index of the table. In general, setting a column as an index can be
achieved by the set_index function.
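A minimal sketch of that alternative (using a small stand-in frame, assumed here for brevity):
import pandas as pd
df = pd.DataFrame({"datetime": pd.to_datetime(["2019-05-07 01:00", "2019-05-07 02:00"]),
                   "value": [25.0, 27.7]})
# Promote the datetime column to a DatetimeIndex without pivoting.
df = df.set_index("datetime")
print(df.index)  # DatetimeIndex(['2019-05-07 01:00:00', '2019-05-07 02:00:00'], ...)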
Working with a datetime index (i.e. DatetimeIndex) provides powerful
functionalities. For example, we do not need the dt accessor to get
the time series properties, but have these properties available on the
index directly:
In [20]: no_2.index.year, no_2.index.weekday
Out[20]:
(Int64Index([2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', name='datetime', length=1033),
Int64Index([1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
...
3, 3, 3, 3, 3, 3, 3, 3, 3, 4],
dtype='int64', name='datetime', length=1033))
Some other advantages are the convenient subsetting of time period or
the adapted time scale on plots. Let’s apply this on our data.
Create a plot of the \(NO_2\) values in the different stations from the 20th of May till the end of 21st of May
In [21]: no_2["2019-05-20":"2019-05-21"].plot();
By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
To user guideMore information on the DatetimeIndex and the slicing by using strings is provided in the section on time series indexing.
Resample a time series to another frequency#
Aggregate the current hourly time series values to the monthly maximum value in each of the stations.
In [22]: monthly_max = no_2.resample("M").max()
In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0
A very powerful method on time series data with a datetime index, is the
ability to resample() time series to another frequency (e.g.,
converting secondly data into 5-minutely data).
The resample() method is similar to a groupby operation:
it provides a time-based grouping, by using a string (e.g. M,
5H,…) that defines the target frequency
it requires an aggregation function such as mean, max,…
To user guideAn overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
When defined, the frequency of the time series is provided by the
freq attribute:
In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>
Make a plot of the daily mean \(NO_2\) value in each of the stations.
In [25]: no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));
To user guideMore details on the power of time series resampling is provided in the user guide section on resampling.
REMEMBER
Valid date strings can be converted to datetime objects using
to_datetime function or as part of read functions.
Datetime objects in pandas support calculations, logical operations
and convenient date-related properties using the dt accessor.
A DatetimeIndex contains these date-related properties and
supports convenient slicing.
Resample is a powerful method to change the frequency of a time
series.
To user guideA full overview on time series is given on the pages on time series and date functionality.
| 683
| 1,015
|
Better Pandas way to count frequency of values in different columns together
I have a pandas.DataFrame with ZIPCODES in two columns. I just want to count the appearence of all ZIPCODES with value_counts(). But it does not matter for me in which column they are. I need a result for all ZIPCODE columns in the DataFrame together.
This is the inital data with ZIPCODES in the columns:
ZIPCODE_A ZIPCODE_B
0 10000 40000
1 20000 30000
2 20000 20000
3 10000 50000
4 30000 10000
The final and expected result would be:
ZIPCODE_N
10000 3
20000 3
30000 2
40000 1
50000 1
Question
My solution works but looks complicated. Is there another more elegant pandas-way to solve this?
MWE
#!/usr/bin/env python3
import pandas as pd
df = pd.DataFrame({'ZIPCODE_A': [10000, 20000, 20000, 10000, 30000],
'ZIPCODE_B': [40000, 30000, 20000, 50000, 10000]})
print(df)
a = df.ZIPCODE_A.value_counts()
b = df.ZIPCODE_B.value_counts()
a = pd.DataFrame(a)
b = pd.DataFrame(b)
r = a.join(b, how='outer')
r.loc[r.ZIPCODE_A.isna(), 'ZIPCODE_A'] = 0
r.loc[r.ZIPCODE_B.isna(), 'ZIPCODE_B'] = 0
r['ZIPCODE_N'] = r.ZIPCODE_A + r.ZIPCODE_B
r.ZIPCODE_N = r.ZIPCODE_N.astype(int)
del r['ZIPCODE_A']
del r['ZIPCODE_B']
print(r)
|
68,544,803
|
Generate matrix from two columns of a pandas dataframe
|
<p>I have a dataframe as below:</p>
<pre><code>df = pd.DataFrame({'Id': [1,1,2,2,3,3],
'Call_loc': ['SJDR', 'SJDR', 'SJDR', 'TR', 'LD', 'LD'],
'Pres_Resid': ['SJDR', 'SJDR', 'TR', 'TR', 'LD', 'LD']})
df
</code></pre>
<p>I need to generate an array where the columns are the unique values of the <code>Call_loc</code> column and the indices are the unique values of the <code>Id</code> column. The matrix values will be filled by the number of times a value in <code>Call_loc</code> is repeated by <code>Id</code>. The expected output is this:</p>
<pre><code> |SJDR|TR |LD
1 | 2 | 0 | 0
2 | 1 | 1 | 0
3 | 0 | 0 | 2
</code></pre>
<p>I'm having trouble counting reps. Can anyone help?</p>
| 68,544,893
| 2021-07-27T12:25:51.550000
| 1
| null | 0
| 344
|
python|pandas
|
<p>You can use <a href="https://pbpython.com/pandas-crosstab.html" rel="nofollow noreferrer"><code>pd.crosstab</code></a>:</p>
<pre><code>>>> pd.crosstab(df.Id, df.Call_loc)
LD SJDR TR
1 0 2 0
2 0 1 1
3 2 0 0
</code></pre>
<p>If column order is important, you can change it slightly:</p>
<pre><code>cols = ['SJDR','TR','LD']
new = pd.crosstab(df.Id, df.Call_loc)[cols]
</code></pre>
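<p>For reference (a hedged alternative sketch, not part of the accepted answer), the same table can be built with a groupby and <code>unstack</code>:</p>
<pre><code>out = df.groupby(['Id', 'Call_loc']).size().unstack(fill_value=0)
</code></pre>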
| 2021-07-27T12:32:01.377000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cov.html
|
pandas.DataFrame.cov#
pandas.DataFrame.cov#
DataFrame.cov(min_periods=None, ddof=1, numeric_only=_NoDefault.no_default)[source]#
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame.
The returned data frame is the covariance matrix of the columns
of the DataFrame.
Both NA and null values are automatically excluded from the
calculation. (See the note below about bias from missing values.)
A threshold can be set for the minimum number of
observations for each value created. Comparisons with observations
You can use pd.crosstab:
>>> pd.crosstab(df.Id, df.Call_loc)
LD SJDR TR
1 0 2 0
2 0 1 1
3 2 0 0
If column order is important, you can change it slightly:
cols = ['SJDR','TR','LD']
new = pd.crosstab(df.Id, df.Call_loc)[cols]
below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to
understand the relationship between different measures
across time.
Parameters
min_periodsint, optionalMinimum number of observations required per pair of columns
to have a valid result.
ddofint, default 1Delta degrees of freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
New in version 1.1.0.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
DataFrameThe covariance matrix of the series of the DataFrame.
See also
Series.covCompute covariance with another Series.
core.window.ewm.ExponentialMovingWindow.covExponential weighted sample covariance.
core.window.expanding.Expanding.covExpanding sample covariance.
core.window.rolling.Rolling.covRolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series.
The covariance is normalized by N-ddof.
For DataFrames that have Series that are missing data (assuming that
data is missing at random)
the returned covariance matrix will be an unbiased estimate
of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable
because the estimate covariance matrix is not guaranteed to be positive
semi-definite. This could lead to estimate correlations having
absolute values which are greater than one, and/or a non-invertible
covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
... columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
a b c d e
a 0.998438 -0.020161 0.059277 -0.008943 0.014144
b -0.020161 1.059352 -0.008543 -0.024738 0.009826
c 0.059277 -0.008543 1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486 0.921297 -0.013692
e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword
that specifies the required minimum number of non-NA observations for
each column pair in order to have a valid result:
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
... columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
a b c
a 0.316741 NaN -0.150812
b NaN 1.248003 0.191417
c -0.150812 0.191417 0.895202
| 588
| 859
|
Generate matrix from two columns of a pandas dataframe
I have a dataframe as below:
df = pd.DataFrame({'Id': [1,1,2,2,3,3],
'Call_loc': ['SJDR', 'SJDR', 'SJDR', 'TR', 'LD', 'LD'],
'Pres_Resid': ['SJDR', 'SJDR', 'TR', 'TR', 'LD', 'LD']})
df
I need to generate an array where the columns are the unique values of the Call_loc column and the indices are the unique values of the Id column. The matrix values will be filled by the number of times a value in Call_loc is repeated by Id. The expected output is this:
|SJDR|TR |LD
1 | 2 | 0 | 0
2 | 1 | 1 | 0
3 | 0 | 0 | 2
I'm having trouble counting reps. Can anyone help?
|
62,599,582
|
Cannot append Data with pandas.append function
|
<p>The following code is not running, and I have no idea why.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(columns=['Interval', 'Weight'])
intersection = np.array([1,2,3,4,5])
weight = 0.85
df.append({'Intersection':intersection,'Weight':weight}, ignore_index=True)
print(df)
</code></pre>
<p>I always get just the following result:</p>
<p>Empty DataFrame</p>
<p>Columns: [Interval, Weight]</p>
| 62,599,779
| 2020-06-26T17:04:52.207000
| 1
| null | 0
| 92
|
python|pandas
|
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.DataFrame(columns=['Interval', 'Weight'])
intersection = np.array(range(1, 6))
weight = 0.85
# the initial dataframe is empty with two columns
# add or update columns with
df['Intersection'] = intersection
df['Weight'] = weight
# output
Interval Weight Intersection
0 NaN 0.85 1
1 NaN 0.85 2
2 NaN 0.85 3
3 NaN 0.85 4
4 NaN 0.85 5
</code></pre>
<ul>
<li>Using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>df.append</code></a>
<ul>
<li><strong>Returns</strong> a new DataFrame that must be assigned to a variable.</li>
</ul>
</li>
<li>This will add the <code>Intersection</code> array as a single row.</li>
<li>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas-dataframe-explode" rel="nofollow noreferrer"><code>pandas.DataFrame.explode</code></a> to transform each element of a list to a row, replicating index values.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>df = df.append({'Intersection': intersection, 'Weight': weight}, ignore_index=True)
# result
Interval Weight Intersection
0 NaN 0.85 [1, 2, 3, 4, 5]
# if you want separate rows, use .explode
df = df.explode('Intersection')
# result
Interval Weight Intersection
0 NaN 0.85 1
0 NaN 0.85 2
0 NaN 0.85 3
0 NaN 0.85 4
0 NaN 0.85 5
</code></pre>
<ul>
<li>With <code>.append</code>, if you want separate rows, as shown in the docs.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>for i in intersection:
df = df.append({'Intersection': i, 'Weight': weight}, ignore_index=True)
# result
Interval Weight Intersection
0 NaN 0.85 1.0
1 NaN 0.85 2.0
2 NaN 0.85 3.0
3 NaN 0.85 4.0
4 NaN 0.85 5.0
</code></pre>
| 2020-06-26T17:17:16.230000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.ExcelWriter.html
|
import pandas as pd
import numpy as np
df = pd.DataFrame(columns=['Interval', 'Weight'])
intersection = np.array(range(1, 6))
weight = 0.85
# the initial dataframe is empty with two columns
# add or update columns with
df['Intersection'] = intersection
df['Weight'] = weight
# output
Interval Weight Intersection
0 NaN 0.85 1
1 NaN 0.85 2
2 NaN 0.85 3
3 NaN 0.85 4
4 NaN 0.85 5
Using df.append
Returns a new DataFrame that must be assigned to a variable.
This will add the Intersection array as a single row.
Use pandas.DataFrame.explode to transform each element of a list to a row, replicating index values.
df = df.append({'Intersection': intersection, 'Weight': weight}, ignore_index=True)
# result
Interval Weight Intersection
0 NaN 0.85 [1, 2, 3, 4, 5]
# if you want separate rows, use .explode
df = df.explode('Intersection')
# result
Interval Weight Intersection
0 NaN 0.85 1
0 NaN 0.85 2
0 NaN 0.85 3
0 NaN 0.85 4
0 NaN 0.85 5
With .append, if you want separate rows, as shown in the docs.
for i in intersection:
df = df.append({'Intersection': i, 'Weight': weight}, ignore_index=True)
# result
Interval Weight Intersection
0 NaN 0.85 1.0
1 NaN 0.85 2.0
2 NaN 0.85 3.0
3 NaN 0.85 4.0
4 NaN 0.85 5.0
| 0
| 1,557
|
Cannot append Data with pandas.append function
The following code is not running, and I have no idea why.
import pandas as pd
import numpy as np
df = pd.DataFrame(columns=['Interval', 'Weight'])
intersection = np.array([1,2,3,4,5])
weight = 0.85
df.append({'Intersection':intersection,'Weight':weight}, ignore_index=True)
print(df)
I always get just the following result:
Empty DataFrame
Columns: [Interval, Weight]
|
60,514,666
|
drop rows in pandas dataframe that are not meeting the conditions
|
<p>I have a pandas dataframe and need to clean <code>Status</code> column. My data looks like this:</p>
<pre><code>id Status
123 100%
124 0%
125 1%
126 100%
127 0.25%
</code></pre>
<p>I want to exclude all the rows that are NOT 100% or 0%. The type of the column is <code>object</code></p>
<p>I want my data looks like this:</p>
<pre><code>id Status
123 100%
124 0%
126 100%
</code></pre>
<p>I have tried the following: </p>
<pre><code>df = df.drop(df[(df.Status == '100%') & (df.Status == '0%')].index)
</code></pre>
<p>But that actually doesn't change the dataset at all. </p>
<p>Thank you! </p>
| 60,514,747
| 2020-03-03T20:03:04.610000
| 3
| null | 2
| 866
|
python|pandas
|
<p>You can conditionally select the rows that meet your criteria and set that as the new dataframe value</p>
<pre><code>df = df.loc[(df['Status'] == '100%') | (df['Status'] == '0%')]
</code></pre>
<p>Edit: use "|" instead of "&", since both conditions cannot be true at the same time, which would return 0 results.</p>
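<p>A slightly shorter variant (a sketch, relying on the same assumption that the column holds exactly these strings) uses <code>isin</code>:</p>
<pre><code>df = df[df['Status'].isin(['100%', '0%'])]
</code></pre>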
| 2020-03-03T20:08:28.540000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.any.html
|
pandas.DataFrame.any#
pandas.DataFrame.any#
DataFrame.any(*, axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]#
Return whether any element is True, potentially over an axis.
Returns False unless there is at least one element within a series or
along a Dataframe axis that is True or equivalent (e.g. non-zero or
non-empty).
Parameters
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Indicate which axis or axes should be reduced. For Series this parameter
is unused and defaults to 0.
0 / ‘index’ : reduce the index, return a Series whose index is the
original column labels.
1 / ‘columns’ : reduce the columns, return a Series whose index is the
original index.
None : reduce all axes, return a scalar.
bool_onlybool, default NoneInclude only boolean columns. If None, will attempt to use everything,
then use only boolean data. Not implemented for Series.
skipnabool, default TrueExclude NA/null values. If the entire row/column is NA and skipna is
You can conditionally select the rows that meet your criteria and set that as the new dataframe value
df = df.loc[(df['Status'] == '100%') | (df['Status'] == '0%')]
Edit: use "|" instead of "&", since both conditions cannot be true at the same time, which would return 0 results.
True, then the result will be False, as for an empty row/column.
If skipna is False, then NA are treated as True, because these are not
equal to zero.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
**kwargsany, default NoneAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameIf level is specified, then, DataFrame is returned; otherwise, Series
is returned.
See also
numpy.anyNumpy version of this method.
Series.anyReturn whether any element is True.
Series.allReturn whether all elements are True.
DataFrame.anyReturn whether any element is True over requested axis.
DataFrame.allReturn whether all elements are True over requested axis.
Examples
Series
For Series input, the output is a scalar indicating whether any element
is True.
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([], dtype="float64").any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
>>> df
A B
0 True 1
1 False 2
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df
A B
0 True 1
1 False 0
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
Aggregating over the entire DataFrame with axis=None.
>>> df.any(axis=None)
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
| 981
| 1,243
|
drop rows in pandas dataframe that are not meeting the conditions
I have a pandas dataframe and need to clean Status column. My data looks like this:
id Status
123 100%
124 0%
125 1%
126 100%
127 0.25%
I want to exclude all the rows that are NOT 100% or 0%. The type of the column is object
I want my data looks like this:
id Status
123 100%
124 0%
126 100%
I have tried the following:
df = df.drop(df[(df.Status == '100%') & (df.Status == '0%')].index)
But that actually doesn't change the dataset at all.
Thank you!
|
65,098,419
|
Remove column based on header name type
|
<p>Usually, when you want to remove columns which are not of type <code>float</code>, you can write <code>pd.DataFrame.select_dtypes(include='float64')</code>. However, I would like to remove a column in cases where the <strong>header name</strong> is not a float.</p>
<pre><code>df = pd.DataFrame({'a' : [1,2,3], 10 : [3,4,5]})
df.dtypes
</code></pre>
<p>will give the output</p>
<pre><code>a int64
10 int64
dtype: object
</code></pre>
<p>How can I remove the column <code>a</code> based on the fact that it's not a <code>float</code> or <code>int</code>?</p>
| 65,098,487
| 2020-12-01T20:58:15.247000
| 4
| null | 0
| 100
|
python|pandas
|
<p>Please try dropping the columns whose names contain a digit, using a regex, if you want to drop <code>10</code>:</p>
<pre><code>df.filter(regex='\d', axis=1)
#On the contrary, you can drop nondigits too
# df.filter(regex='\D', axis=1)
</code></pre>
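<p>If the goal is literally to keep only columns whose <em>header</em> is an <code>int</code> or <code>float</code> (a sketch based on the wording of the question, not on the answer above), the column labels themselves can be type-checked:</p>
<pre><code>keep = [c for c in df.columns if isinstance(c, (int, float))]
df = df[keep]
</code></pre>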
| 2020-12-01T21:03:11.370000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rename.html
|
pandas.DataFrame.rename#
pandas.DataFrame.rename#
DataFrame.rename(mapper=None, *, index=None, columns=None, axis=None, copy=None, inplace=False, level=None, errors='ignore')[source]#
Alter axes labels.
Function / dict values must be unique (1-to-1). Labels not contained in
a dict / Series will be left as-is. Extra labels listed don’t throw an
error.
See the user guide for more.
Parameters
mapperdict-like or functionDict-like or function transformations to apply to
that axis’ values. Use either mapper and axis to
Please try dropping the columns whose names contain a digit, using a regex, if you want to drop 10:
df.filter(regex='\d', axis=1)
#On the contrary, you can drop nondigits too
# df.filter(regex='\D', axis=1)
specify the axis to target with mapper, or index and
columns.
indexdict-like or functionAlternative to specifying axis (mapper, axis=0
is equivalent to index=mapper).
columnsdict-like or functionAlternative to specifying axis (mapper, axis=1
is equivalent to columns=mapper).
axis{0 or ‘index’, 1 or ‘columns’}, default 0Axis to target with mapper. Can be either the axis name
(‘index’, ‘columns’) or number (0, 1). The default is ‘index’.
copybool, default TrueAlso copy underlying data.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
If True then value of copy is ignored.
levelint or level name, default NoneIn case of a MultiIndex, only rename labels in the specified
level.
errors{‘ignore’, ‘raise’}, default ‘ignore’If ‘raise’, raise a KeyError when a dict-like mapper, index,
or columns contains labels that are not present in the Index
being transformed.
If ‘ignore’, existing keys will be renamed and extra keys will be
ignored.
Returns
DataFrame or NoneDataFrame with the renamed axis labels or None if inplace=True.
Raises
KeyErrorIf any of the labels is not found in the selected axis and
“errors=’raise’”.
See also
DataFrame.rename_axisSet the name of the axis.
Examples
DataFrame.rename supports two calling conventions
(index=index_mapper, columns=columns_mapper, ...)
(mapper, axis={'index', 'columns'}, ...)
We highly recommend using keyword arguments to clarify your
intent.
Rename columns using a mapping:
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> df.rename(columns={"A": "a", "B": "c"})
a c
0 1 4
1 2 5
2 3 6
Rename index using a mapping:
>>> df.rename(index={0: "x", 1: "y", 2: "z"})
A B
x 1 4
y 2 5
z 3 6
Cast index labels to a different type:
>>> df.index
RangeIndex(start=0, stop=3, step=1)
>>> df.rename(index=str).index
Index(['0', '1', '2'], dtype='object')
>>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise")
Traceback (most recent call last):
KeyError: ['C'] not found in axis
Using axis-style parameters:
>>> df.rename(str.lower, axis='columns')
a b
0 1 4
1 2 5
2 3 6
>>> df.rename({1: 2, 2: 4}, axis='index')
A B
0 1 4
2 2 5
4 3 6
| 525
| 708
|
Remove column based on header name type
Usually, when you want to remove columns which are not of type float, you can write pd.DataFrame.select_dtypes(include='float64'). However, I would like to remove a column in cases where the header name is not a float.
df = pd.DataFrame({'a' : [1,2,3], 10 : [3,4,5]})
df.dtypes
will give the output
a int64
10 int64
dtype: object
How can I remove the column a based on the fact that it's not a float or int?
|
69,188,407
|
Pandas category shows different behaviour when equating dtype = 'float64' and dtype = 'category'
|
<p>I try to use a loop to do some operations on the Pandas numeric and category columns.</p>
<pre><code>df = sns.load_dataset('diamonds')
print(df.dtypes,'\n')
carat float64
cut category
color category
clarity category
depth float64
table float64
price int64
x float64
y float64
z float64
dtype: object
</code></pre>
<p>In the following code, I simply cut and paste 'float64' and 'category' from the output of the preceding step.</p>
<pre><code>for i in df.columns:
if df[i].dtypes in ['float64']:
print(i)
for i in df.columns:
if df[i].dtypes in ['category']:
print(i)
</code></pre>
<p>I found that it works for 'float64' but generates an error for 'category'.</p>
<p>Why is this ? Thanks very much !!!</p>
<pre><code>carat
depth
table
x
y
z
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-74-8e6aa9d4726e> in <module>
4
5 for i in df.columns:
----> 6 if df[i].dtypes in ['category']:
7 print(i)
TypeError: data type 'category' not understood
</code></pre>
| 69,188,478
| 2021-09-15T06:58:50.353000
| 2
| null | 4
| 107
|
python|pandas
|
<h1>Solution</h1>
<p>Try using <code>pd.api.types.is_categorical_dtype</code>:</p>
<pre><code>for i in df.columns:
if pd.api.types.is_categorical_dtype(df[i]):
print(i)
</code></pre>
<p>Or check the <code>dtype</code> name:</p>
<pre><code>for i in df.columns:
if df[i].dtype.name == 'category':
print(i)
</code></pre>
<p>Output:</p>
<pre><code>cut
color
clarity
</code></pre>
<h1>Explanation:</h1>
<p>This is a bug in Pandas; <a href="https://github.com/pandas-dev/pandas/issues/16697" rel="nofollow noreferrer">here</a> is the GitHub issue, which notes:</p>
<blockquote>
<p><code>df.dtypes[colname] == 'category'</code> evaluates as <code>True</code> for categorical columns and raises <code>TypeError: data type "category"</code> not understood for <code>np.float64</code> columns.</p>
</blockquote>
<p>So it actually works: it does give <code>True</code> for categorical columns, but the problem is that the numpy <code>float64</code> dtype check does not cooperate with pandas dtypes such as <code>category</code>.</p>
<p>If you order the columns differently, with the first 3 columns being of categorical dtype, it will print those column names, but once the float columns come it will raise an error due to the numpy/pandas type mismatch:</p>
<pre><code>>>> df = df.iloc[:, 1:]
>>> df
cut color clarity depth table price x y z
0 Ideal E SI2 61.5 55.0 326 3.95 3.98 2.43
1 Premium E SI1 59.8 61.0 326 3.89 3.84 2.31
2 Good E VS1 56.9 65.0 327 4.05 4.07 2.31
3 Premium I VS2 62.4 58.0 334 4.20 4.23 2.63
4 Good J SI2 63.3 58.0 335 4.34 4.35 2.75
... ... ... ... ... ... ... ... ... ...
53935 Ideal D SI1 60.8 57.0 2757 5.75 5.76 3.50
53936 Good D SI1 63.1 55.0 2757 5.69 5.75 3.61
53937 Very Good D SI1 62.8 60.0 2757 5.66 5.68 3.56
53938 Premium H SI2 61.0 58.0 2757 6.15 6.12 3.74
53939 Ideal D SI2 62.2 55.0 2757 5.83 5.87 3.64
[53940 rows x 9 columns]
>>> for i in df.columns:
if df[i].dtypes in ['category']:
print(i)
cut
color
clarity
Traceback (most recent call last):
File "<pyshell#138>", line 2, in <module>
if df[i].dtypes in ['category']:
TypeError: data type 'category' not understood
>>>
</code></pre>
<p>As you can see, it did output the categorical columns, but once <code>np.float64</code>-dtyped columns appear, the numpy <code>__eq__</code> magic method throws an error from the numpy backend.</p>
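<p>A further option (a brief sketch, not part of the explanation above) is to let pandas do the dtype filtering and avoid the comparison altogether:</p>
<pre><code>for i in df.select_dtypes(include='category').columns:
    print(i)
</code></pre>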
| 2021-09-15T07:04:17.403000
| 2
|
https://pandas.pydata.org/docs/user_guide/dsintro.html
|
Solution
Try using pd.api.types.is_categorical_dtype:
for i in df.columns:
if pd.api.types.is_categorical_dtype(df[i]):
print(i)
Or check the dtype name:
for i in df.columns:
if df[i].dtype.name == 'category':
print(i)
Output:
cut
color
clarity
Explanation:
This is a bug in Pandas; here is the GitHub issue, which notes:
df.dtypes[colname] == 'category' evaluates as True for categorical columns and raises TypeError: data type "category" not understood for np.float64 columns.
So it actually works: it does give True for categorical columns, but the problem is that the numpy float64 dtype check does not cooperate with pandas dtypes such as category.
If you order the columns differently, with the first 3 columns being of categorical dtype, it will print those column names, but once the float columns come it will raise an error due to the numpy/pandas type mismatch:
>>> df = df.iloc[:, 1:]
>>> df
cut color clarity depth table price x y z
0 Ideal E SI2 61.5 55.0 326 3.95 3.98 2.43
1 Premium E SI1 59.8 61.0 326 3.89 3.84 2.31
2 Good E VS1 56.9 65.0 327 4.05 4.07 2.31
3 Premium I VS2 62.4 58.0 334 4.20 4.23 2.63
4 Good J SI2 63.3 58.0 335 4.34 4.35 2.75
... ... ... ... ... ... ... ... ... ...
53935 Ideal D SI1 60.8 57.0 2757 5.75 5.76 3.50
53936 Good D SI1 63.1 55.0 2757 5.69 5.75 3.61
53937 Very Good D SI1 62.8 60.0 2757 5.66 5.68 3.56
53938 Premium H SI2 61.0 58.0 2757 6.15 6.12 3.74
53939 Ideal D SI2 62.2 55.0 2757 5.83 5.87 3.64
[53940 rows x 9 columns]
>>> for i in df.columns:
if df[i].dtypes in ['category']:
print(i)
cut
color
clarity
Traceback (most recent call last):
File "<pyshell#138>", line 2, in <module>
if df[i].dtypes in ['category']:
TypeError: data type 'category' not understood
>>>
As you can see, it did output the categorical columns, but once np.float64-dtyped columns appear, the numpy __eq__ magic method throws an error from the numpy backend.
| 0
| 2,241
|
Pandas category shows different behaviour when equating dtype = 'float64' and dtype = 'category'
I try to use a loop to do some operations on the Pandas numeric and category columns.
df = sns.load_dataset('diamonds')
print(df.dtypes,'\n')
carat float64
cut category
color category
clarity category
depth float64
table float64
price int64
x float64
y float64
z float64
dtype: object
In the following code, I simply cut and paste 'float64' and 'category' from the output of the preceding step.
for i in df.columns:
if df[i].dtypes in ['float64']:
print(i)
for i in df.columns:
if df[i].dtypes in ['category']:
print(i)
I found that it works for 'float64' but generates an error for 'category'.
Why is this ? Thanks very much !!!
carat
depth
table
x
y
z
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-74-8e6aa9d4726e> in <module>
4
5 for i in df.columns:
----> 6 if df[i].dtypes in ['category']:
7 print(i)
TypeError: data type 'category' not understood
|
65,209,505
|
How can I divide rows in the dataframe and rank them as new column?
|
<p>For example, I have a dataset that looks like this:</p>
<pre><code>d = {'Rank': [1, 2, 3, 4, 5, 6], 'Name': ["apple", "banana", "grape", "orange", "plum", "blackberry"]}
df = pd.DataFrame(data=d)
df.head()
</code></pre>
<p>And I want to get a new column "level": based on the rank column, I want to have a new rank such as
['High', 'Medium', 'Low'], so my dataframe will look like this:</p>
<pre><code> Rank Name New Rank
0 1 apple High
1 2 banana High
2 3 grape Medium
3 4 orange Medium
4 5 plum Low
5 6 blackberry Low
</code></pre>
<p>I'm not sure the name of this type of calculation(?) so I've been searching using the keywords such as "rank" and "divide new rank", and I wasn't able to find the right help. (I saw a lot of <code>.rank()</code> but that was not what I wanted..) What is the name of this process and how can I achieve this?</p>
<p>Thank you in advance!</p>
| 65,209,543
| 2020-12-09T01:49:33.737000
| 1
| null | -1
| 107
|
python|pandas
|
<p>I would use the <code>cut</code> function, which essentially acts like a histogram labeler that you give custom labels to:</p>
<pre class="lang-py prettyprint-override"><code>df = pandas.DataFrame({'Rank': [1, 2, 3, 4, 5, 6]})
df['New Rank'] = pandas.cut(df['Rank'], bins=3, labels=['High', 'Medium', 'Low'])
</code></pre>
<pre><code> Rank New Rank
0 1 High
1 2 High
2 3 Medium
3 4 Medium
4 5 Low
5 6 Low
</code></pre>
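<p>If the intent is instead to split the ranks into three equally sized groups by count rather than by equal-width bins (an assumption, shown only as a sketch), <code>pandas.qcut</code> works the same way:</p>
<pre><code>df['New Rank'] = pandas.qcut(df['Rank'], q=3, labels=['High', 'Medium', 'Low'])
</code></pre>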
| 2020-12-09T01:54:50.680000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.divide.html
|
pandas.DataFrame.divide#
pandas.DataFrame.divide#
DataFrame.divide(other, axis='columns', level=None, fill_value=None)[source]#
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
I would use the cut function, which essentially acts like a histogram labeler that you give custom labels to:
df = pandas.DataFrame({'Rank': [1, 2, 3, 4, 5, 6]})
df['New Rank'] = pandas.cut(df['Rank'], bins=3, labels=['High', 'Medium', 'Low'])
Rank New Rank
0 1 High
1 2 High
2 3 Medium
3 4 Medium
4 5 Low
5 6 Low
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with operator version which return the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
| 962
| 1,326
|
How can I divide rows in the dataframe and rank them as new column?
For example, I have a dataset that looks like this:
d = {'Rank': [1, 2, 3, 4, 5, 6], 'Name': ["apple", "banana", "grape", "orange", "plum", "blackberry"]}
df = pd.DataFrame(data=d)
df.head()
And I want to get a new column "level": based on the rank column, I want to have a new rank such as
['High', 'Medium', 'Low'], so my dataframe will look like this:
Rank Name New Rank
0 1 apple High
1 2 banana High
2 3 grape Medium
3 4 orange Medium
4 5 plum Low
5 6 blackberry Low
I'm not sure the name of this type of calculation(?) so I've been searching using the keywords such as "rank" and "divide new rank", and I wasn't able to find the right help. (I saw a lot of .rank() but that was not what I wanted..) What is the name of this process and how can I achieve this?
Thank you in advance!
|
69,976,966
|
pandas - find index distance between batches of equal values in a row
|
<p>I would like to find the "distance" between the starting points of two batches of <code>1</code>'s in a row or in other words the length of batches of "<code>1</code>'s followed by <code>0</code>'s" (indicated with spaces below).</p>
<p>So I start with the following series:</p>
<pre><code>df = pd.Series([0,0, 1,1,1,0,0, 1,1,0, 1,1,1,0,0,0,0, 1,1,1,0,0,0, 1,1,0,0])
</code></pre>
<p>and would like to get the following output:</p>
<pre><code>0 NaN
1 5.0
2 3.0
3 7.0
4 6.0
5 NaN
</code></pre>
<p>I know how to get either the counts of the number of <code>1</code>'s in a row or the counts of the number of <code>0</code>'s in a row, but I don't know how to deal with this pattern of <code>1</code>'s followed by <code>0</code>'s as a pattern of its own...</p>
<p>Having NaN's at the beginning and end would be the ideal case but is not necessary.</p>
| 69,977,052
| 2021-11-15T15:36:11.653000
| 1
| null | 1
| 107
|
python|pandas
|
<p>Use <code>diff()</code> to find the difference, <code>1</code> indicates starting of a new batch. Then you can use <code>np.diff</code> on the index:</p>
<pre><code>s = df.diff().eq(1)
np.diff(s.index[s])
# or a one-liner
# np.diff(np.where(df.diff().eq(1))[0])
</code></pre>
<p>Output:</p>
<pre><code>array([5, 3, 7, 6])
</code></pre>
<p><strong>Note</strong> There is an edge case where the series starts with a <code>1</code>.</p>
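<p>One way to cover that edge case (a sketch, under the assumption that a leading <code>1</code> should also count as the start of a batch, and that pandas and numpy are imported as <code>pd</code>/<code>np</code> as in the question) is to prepend a <code>0</code> before taking the difference:</p>
<pre><code>padded = pd.concat([pd.Series([0]), df], ignore_index=True)
starts = padded.diff().eq(1)   # a leading 1 now shows up as a start as well
np.diff(padded.index[starts])
</code></pre>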
| 2021-11-15T15:42:11.807000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cumsum.html
|
pandas.DataFrame.cumsum#
pandas.DataFrame.cumsum#
DataFrame.cumsum(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative sum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
Use diff() to find the difference, 1 indicates starting of a new batch. Then you can use np.diff on the index:
s = df.diff().eq(1)
np.diff(s.index[s])
# or a one-liner
# np.diff(np.where(df.diff().eq(1))[0])
Output:
array([5, 3, 7, 6])
Note There is an edge case where the series starts with a 1.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameReturn cumulative sum of Series or DataFrame.
See also
core.window.expanding.Expanding.sumSimilar functionality but ignores NaN values.
DataFrame.sumReturn the sum over DataFrame axis.
DataFrame.cummaxReturn cumulative maximum over DataFrame axis.
DataFrame.cumminReturn cumulative minimum over DataFrame axis.
DataFrame.cumsumReturn cumulative sum over DataFrame axis.
DataFrame.cumprodReturn cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row,
use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
| 383
| 682
|
pandas - find index distance between batches of equal values in a row
I would like to find the "distance" between the starting points of two batches of 1's in a row or in other words the length of batches of "1's followed by 0's" (indicated with spaces below).
So I start with the following series:
df = pd.Series([0,0, 1,1,1,0,0, 1,1,0, 1,1,1,0,0,0,0, 1,1,1,0,0,0, 1,1,0,0])
and would like to get the following output:
0 NaN
1 5.0
2 3.0
3 7.0
4 6.0
5 NaN
I know how to get either the counts of the number of 1's in a row or the counts of the number of 0's in a row, but I don't know how to deal with this pattern of 1's followed by 0's as a pattern of its own...
Having NaN's at the beginning and end would be the ideal case but is not necessary.
|
67,977,606
|
Pandas: Subtract timestamps
|
<p>I grouped a dataframe <code>test_df2</code> by frequency <code>'B'</code> (by business day, so each name of the group is the date of that day at 00:00) and am now looping over the groups to calculate timestamp differences and save them in the dict <code>grouped_bins</code>. The data in the original dataframe and the groups looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>timestamp</th>
<th>status</th>
<th>externalId</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>2020-05-11 13:06:05.922</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>7</td>
<td>2020-05-11 13:14:29.759</td>
<td>10</td>
<td>1</td>
</tr>
<tr>
<td>8</td>
<td>2020-05-11 13:16:09.147</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>16</td>
<td>2020-05-11 13:19:08.641</td>
<td>10</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>What I want is to calculate the difference between each row's <code>timestamp</code>, for example between rows <code>7</code> and <code>0</code>, since they have the same <code>externalId</code>.</p>
<p>What I did for that purpose is the following.</p>
<pre><code># Group function. Dataframes are saved in a dict.
def groupDataWithFrequency(self, dataFrameLabel: str, groupKey: str, frequency: str):
'''Group time series by frequency. Starts at the beginning of the data frame.'''
print(f"Binning {dataFrameLabel} data with frequency {frequency}")
if (isinstance(groupKey, str)):
return self.dataDict[dataFrameLabel].groupby(pd.Grouper(key=groupKey, freq=frequency, origin="start"))
grouped_jobstates = groupDataWithFrequency("jobStatus", "timestamp", frequency)
</code></pre>
<p>After grouping, I loop over each group (to maintain the day) and try to calculate the difference between the timestamos, which is where it goes wrong.</p>
<pre><code>grouped_bins = {}
def jobStatusPRAggregator(data, name):
if (data["status"] == 1):
# Find corresponding element in original dataframe
correspondingStatus = test_df2.loc[(test_df2["externalId"] == data["externalId"]) & (test_df2["timestamp"] != data["timestamp"])]
# Calculate time difference
time = correspondingStatus["timestamp"] - data["timestamp"]
# some prints:
print(type(time))
# <class 'pandas.core.series.Series'> --> Why is this a series?
print(time.array[0])
# 0 days 00:08:23.837000 --> This looks correct, I think?
print(time)
# 7 0 days 00:08:23.837000
# Name: timestamp, dtype: timedelta64[ns]
# Check if element exists in dict
elem = next((x for x in grouped_bins if ((x["startDate"] == name) ("productiveTime" in x))), None)
# If does not exist yet, add to dict
if elem is None:
grouped_bins.append( {"startDate": name, "productiveTime": time })
else:
elem["productiveTime"] = elem["productiveTime"]
# See below for problem
# Loop over groups
for name, group in grouped_jobstates:
group.apply(jobStatusPRAggregator, args=(name,), axis=1)
</code></pre>
<p>The problem I face is the following. The element in the dict (<code>elem</code>) looks like this in the end:</p>
<pre><code>{'startDate': Timestamp('2020-05-11 00:00:00', freq='B'), 'productiveTime': 0 NaT
7 NaT
8 NaT
16 NaT
17 NaT
..
1090 NaT
1091 NaT
1099 NaT
1100 NaT
1107 NaT
Name: timestamp, Length: 254, dtype: timedelta64[ns]}
</code></pre>
<p>What I want is something like this:</p>
<pre><code>{'startDate': Timestamp('2020-05-11 00:00:00', freq='B'), 'productiveTime': 2 Days 12 hours 39 minutes 29 seconds
Name: timestamp, Length: 254, dtype: timedelta64[ns]}
</code></pre>
<p>Though I am open to suggestions on how to store time durations in Python/Pandas.</p>
<p>I am also open to suggestions regarding the loop itself.</p>
| 67,977,853
| 2021-06-14T21:45:33.893000
| 2
| null | 0
| 365
|
python|pandas
|
<p>To obtain timestamp differences between consecutive rows of the same <code>externalId</code>, you should be able to simply write, for example:</p>
<pre class="lang-py prettyprint-override"><code>df2 = df.assign(delta=df.groupby('externalId')['timestamp'].diff())
</code></pre>
<p>On the example you give:</p>
<pre class="lang-py prettyprint-override"><code>>>> df2
timestamp status externalId delta
0 2020-05-11 13:06:05.922 1 1 NaT
7 2020-05-11 13:14:29.759 10 1 0 days 00:08:23.837000
8 2020-05-11 13:16:09.147 1 2 NaT
16 2020-05-11 13:19:08.641 10 2 0 days 00:02:59.494000
</code></pre>
<p>If your timestamps are not actually of type <code>Timestamp</code> yet, then you can convert them into it first:</p>
<pre class="lang-py prettyprint-override"><code>df['timestamp'] = pd.to_datetime(df['timestamp'])
</code></pre>
| 2021-06-14T22:20:23.633000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.today.html
|
To obtain timestamp differences between consecutive rows of the same externalId, you should be able to simply write, for example:
df2 = df.assign(delta=df.groupby('externalId')['timestamp'].diff())
On the example you give:
>>> df2
timestamp status externalId delta
0 2020-05-11 13:06:05.922 1 1 NaT
7 2020-05-11 13:14:29.759 10 1 0 days 00:08:23.837000
8 2020-05-11 13:16:09.147 1 2 NaT
16 2020-05-11 13:19:08.641 10 2 0 days 00:02:59.494000
If your timestamps are not actually of type Timestamp yet, then you can convert them into it first:
df['timestamp'] = pd.to_datetime(df['timestamp'])
| 0
| 732
|
Pandas: Subtract timestamps
I grouped a dataframe test_df2 by frequency 'B' (by business day, so each name of the group is the date of that day at 00:00) and am now looping over the groups to calculate timestamp differences and save them in the dict grouped_bins. The data in the original dataframe and the groups looks like this:
    timestamp                 status  externalId
0   2020-05-11 13:06:05.922   1       1
7   2020-05-11 13:14:29.759   10      1
8   2020-05-11 13:16:09.147   1       2
16  2020-05-11 13:19:08.641   10      2
What I want is to calculate the difference between each row's timestamp, for example between rows 7 and 0, since they have the same externalId.
What I did for that purpose is the following.
# Group function. Dataframes are saved in a dict.
def groupDataWithFrequency(self, dataFrameLabel: str, groupKey: str, frequency: str):
'''Group time series by frequency. Starts at the beginning of the data frame.'''
print(f"Binning {dataFrameLabel} data with frequency {frequency}")
if (isinstance(groupKey, str)):
return self.dataDict[dataFrameLabel].groupby(pd.Grouper(key=groupKey, freq=frequency, origin="start"))
grouped_jobstates = groupDataWithFrequency("jobStatus", "timestamp", frequency)
After grouping, I loop over each group (to maintain the day) and try to calculate the timestamp differences, which is where it goes wrong.
grouped_bins = {}
def jobStatusPRAggregator(data, name):
if (data["status"] == 1):
# Find corresponding element in original dataframe
correspondingStatus = test_df2.loc[(test_df2["externalId"] == data["externalId"]) & (test_df2["timestamp"] != data["timestamp"])]
# Calculate time difference
time = correspondingStatus["timestamp"] - data["timestamp"]
# some prints:
print(type(time))
# <class 'pandas.core.series.Series'> --> Why is this a series?
print(time.array[0])
# 0 days 00:08:23.837000 --> This looks correct, I think?
print(time)
# 7 0 days 00:08:23.837000
# Name: timestamp, dtype: timedelta64[ns]
# Check if element exists in dict
elem = next((x for x in grouped_bins if ((x["startDate"] == name) ("productiveTime" in x))), None)
# If does not exist yet, add to dict
if elem is None:
grouped_bins.append( {"startDate": name, "productiveTime": time })
else:
elem["productiveTime"] = elem["productiveTime"]
# See below for problem
# Loop over groups
for name, group in grouped_jobstates:
group.apply(jobStatusPRAggregator, args=(name,), axis=1)
The problem I face is the following. The element in the dict (elem) looks like this in the end:
{'startDate': Timestamp('2020-05-11 00:00:00', freq='B'), 'productiveTime': 0 NaT
7 NaT
8 NaT
16 NaT
17 NaT
..
1090 NaT
1091 NaT
1099 NaT
1100 NaT
1107 NaT
Name: timestamp, Length: 254, dtype: timedelta64[ns]}
What I want is something like this:
{'startDate': Timestamp('2020-05-11 00:00:00', freq='B'), 'productiveTime': 2 Days 12 hours 39 minutes 29 seconds
Name: timestamp, Length: 254, dtype: timedelta64[ns]}
Though I am open to suggestions on how to store time durations in Python/Pandas.
I am also open to suggestions regarding the loop itself.
|
68,352,573
|
Copy values from one column to another column with different rows based on two conditions
|
<p>My dataframe basically looks like this:</p>
<pre><code>data = [[11200, 33000,dt.datetime(1995,3,1),10,np.nan], [11200, 33000, dt.datetime(1995,3,2),11, np.nan],[11200, 33000, dt.datetime(1995,3,3),9, np.nan],\
[23400, 45000, dt.datetime(1995,3,1),50, np.nan], [23400, 45000, dt.datetime(1995,3,3),49, np.nan], [33000, 55000, dt.datetime(1995,3,1),60, np.nan], [33000, 55000, dt.datetime(1995,3,2),61, np.nan]]
df = pd.DataFrame(data, columns = ["Identifier", "Identifier2" ,"date", "price","price2"])
</code></pre>
<p>Output looks like:</p>
<pre><code>index Identifier1 Identifier2 date price1 price2
0 11200 33000 1995-03-01 10 nan
1 11200 33000 1995-03-02 11 nan
2 11200 33000 1995-03-03 9 nan
3 23400 45000 1995-03-01 50 nan
4 23400 45000 1995-03-03 49 nan
5 33000 55000 1995-03-01 60 nan
6 33000 55000 1995-03-02 61 nan
</code></pre>
<p>Please note that my index is not sorted by ascending numbers like the one in my example df.
I would like to look up the number that is in column Identifier2 (I know the exact number I want to look up) in column Identifier1, and then copy the value of price1 into price2 with respect to the correct dates, because some dates are missing.</p>
<p>My goal would look like this:</p>
<pre><code> index Identifier1 Identifier2 date price1 price2
0 11200 33000 1995-03-01 10 60
1 11200 33000 1995-03-02 11 61
2 11200 33000 1995-03-03 9 nan
3 23400 45000 1995-03-01 50 nan
4 23400 45000 1995-03-03 49 nan
5 33000 55000 1995-03-01 60 nan
6 33000 55000 1995-03-02 61 nan
</code></pre>
<p>I'm sure this is not too difficult, but somehow I don't get it.
Thank you very much in advance for any help.</p>
| 68,352,801
| 2021-07-12T18:51:07.037000
| 2
| null | 2
| 125
|
python|pandas
|
<p>One way:</p>
<pre><code>df['price2'] = df[['Identifier2', 'date']].apply(tuple, 1).map(df.set_index(['Identifier','date'])['price'].to_dict())
</code></pre>
<p>OUTPUT:</p>
<pre><code> Identifier Identifier2 date price price2
0 11200 33000 1995-03-01 10 60.0
1 11200 33000 1995-03-02 11 61.0
2 11200 33000 1995-03-03 9 NaN
3 23400 45000 1995-03-01 50 NaN
4 23400 45000 1995-03-03 49 NaN
5 33000 55000 1995-03-01 60 NaN
6 33000 55000 1995-03-02 61 NaN
</code></pre>
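<p>An equivalent route (a sketch using a self-merge, not taken from the answer above) merges the frame with a renamed copy of itself on <code>(Identifier2, date)</code>:</p>
<pre><code>lookup = df[['Identifier', 'date', 'price']].rename(
    columns={'Identifier': 'Identifier2', 'price': 'price2'})
out = df.drop(columns='price2').merge(lookup, on=['Identifier2', 'date'], how='left')
</code></pre>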
| 2021-07-12T19:13:09.653000
| 2
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
One way:
df['price2'] = df[['Identifier2', 'date']].apply(tuple, 1).map(df.set_index(['Identifier','date'])['price'].to_dict())
OUTPUT:
Identifier Identifier2 date price price2
0 11200 33000 1995-03-01 10 60.0
1 11200 33000 1995-03-02 11 61.0
2 11200 33000 1995-03-03 9 NaN
3 23400 45000 1995-03-01 50 NaN
4 23400 45000 1995-03-03 49 NaN
5 33000 55000 1995-03-01 60 NaN
6 33000 55000 1995-03-02 61 NaN
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted dict keys will be used as the keys argument, unless
the keys argument is passed explicitly, in which case the corresponding values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
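A tiny sketch of the name-preservation rule (not from the docs, assuming pandas is imported as pd):
a = pd.Series([1, 2], index=pd.Index(["x", "y"], name="idx"))
b = pd.Series([3, 4], index=pd.Index(["z", "w"], name="idx"))
pd.concat([a, b]).index.name    # 'idx' -- the shared name is kept
c = pd.Series([5], index=pd.Index(["v"], name="other"))
pd.concat([a, c]).index.name    # None -- the names disagree, so the result is unnamed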
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
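For example (a small sketch reusing df1 and s1 from above), the assign() equivalent would be:
result = df1.assign(X=s1)   # same as pd.concat([df1, s1], axis=1) here, since the indexes align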
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists on letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
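For example, a small sketch (reusing df1 from the first example above) that appends a row while keeping an explicit index label:
row = pd.DataFrame([["X0", "X1", "X2", "X3"]], columns=["A", "B", "C", "D"], index=[4])
result = pd.concat([df1, row])   # the explicit label 4 is preserved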
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
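A minimal sketch of these spellings (an illustration, assuming two small frames that share a 'key' column):
left = pd.DataFrame({"key": ["K0", "K1"], "A": [1, 2]})
right = pd.DataFrame({"key": ["K0", "K2"], "B": [3, 4]})
pd.merge(left, right, on="key")      # function form
left.merge(right, on="key")          # same merge as a method; left is the calling frame
# an index-on-index join can be written more tersely with DataFrame.join
left.set_index("key").join(right.set_index("key"), how="inner")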
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method | SQL Join Name    | Description
left         | LEFT OUTER JOIN  | Use keys from left frame only
right        | RIGHT OUTER JOIN | Use keys from right frame only
outer        | FULL OUTER JOIN  | Use union of keys from both frames
inner        | INNER JOIN       | Use intersection of keys from both frames
cross        | CROSS JOIN       | Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user’s responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin              | _merge value
Merge key only in 'left' frame  | left_only
Merge key only in 'right' frame | right_only
Merge key in both frames        | both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the following reset_index/merge approach.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example; we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 39
| 600
|
Copy values from one column to another column with different rows based on two conditions
my dataframe looks basically like this:
data = [[11200, 33000,dt.datetime(1995,3,1),10,np.nan], [11200, 33000, dt.datetime(1995,3,2),11, np.nan],[11200, 33000, dt.datetime(1995,3,3),9, np.nan],\
[23400, 45000, dt.datetime(1995,3,1),50, np.nan], [23400, 45000, dt.datetime(1995,3,3),49, np.nan], [33000, 55000, dt.datetime(1995,3,1),60, np.nan], [33000, 55000, dt.datetime(1995,3,2),61, np.nan]]
df = pd.DataFrame(data, columns = ["Identifier", "Identifier2" ,"date", "price","price2"])
Output looks like:
index Identifier1 Identifier2 date price1 price2
0 11200 33000 1995-03-01 10 nan
1 11200 33000 1995-03-02 11 nan
2 11200 33000 1995-03-03 9 nan
3 23400 45000 1995-03-01 50 nan
4 23400 45000 1995-03-03 49 nan
5 33000 55000 1995-03-01 60 nan
6 33000 55000 1995-03-02 61 nan
Please note that my index is not sorted in ascending order like in my example df.
I would like to look up the number from column Identifier2 (I know the exact number I want to look up) in column Identifier1, and then copy the value of price1 into price2 for the matching dates, because some dates are missing.
My goal would look like this:
index Identifier1 Identifier2 date price1 price2
0 11200 33000 1995-03-01 10 60
1 11200 33000 1995-03-02 11 61
2 11200 33000 1995-03-03 9 nan
3 23400 45000 1995-03-01 50 nan
4 23400 45000 1995-03-03 49 nan
5 33000 55000 1995-03-01 60 nan
6 33000 55000 1995-03-02 61 nan
I'm sure this is not too difficult, but somehow I don't get it.
Thank you very much in advance for any help.
|
59,694,412
|
Pandas groupby and remove rows if a group has a number greater than a threshold
|
<pre><code>Group Col2 Col3
Grp1 1
Grp1 1
Grp1 1
Grp1 2
Grp1 3
Grp1 3
Grp2 1
Grp2 1
Grp2 1
Grp3 1
Grp3 2
Grp3 3
Grp4 1
</code></pre>
<p>And I would like to group by Group and remove all groups from the data frame where a number in Col2 exceeds 2.
Here I should get:</p>
<pre><code>Group Col2
Grp2 1
Grp2 1
Grp2 1
Grp4 1
</code></pre>
<p>Does anyone have an idea?</p>
| 59,694,449
| 2020-01-11T12:31:17.170000
| 2
| null | 2
| 386
|
python|pandas
|
<p>Use comparison functions like <code>eq</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> and test whether all values match with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.all.html" rel="nofollow noreferrer"><code>GroupBy.all</code></a>:</p>
<pre><code>df1 = df[df['Col2'].eq(1).groupby(df['Group']).transform('all')]
print (df1)
Group Col2 Col3
6 Grp2 1 NaN
7 Grp2 1 NaN
8 Grp2 1 NaN
12 Grp4 1 NaN
</code></pre>
<p>Or get all groups with at least one non-matching value with <code>ne</code> and filter by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> with an inverted mask (<code>~</code>) in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df = df[~df['Group'].isin(df.loc[df['Col2'].ne(1), 'Group'])]
print (df)
Group Col2 Col3
6 Grp2 1 NaN
7 Grp2 1 NaN
8 Grp2 1 NaN
12 Grp4 1 NaN
</code></pre>
<p>If you want to compare against values less than 2:</p>
<pre><code># keep groups whose values are all less than 2
df1 = df[df['Col2'].lt(2).groupby(df['Group']).transform('all')]
# or remove groups containing any value greater than or equal to 2
df1 = df[~df['Group'].isin(df.loc[df['Col2'].ge(2), 'Group'])]
</code></pre>
<hr>
<p>List of all functions for compare:</p>
<ul>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.lt.html" rel="nofollow noreferrer"><code>Series.lt</code></a> - <code><</code> </li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>Series.gt</code></a> - <code>></code> </li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.le.html" rel="nofollow noreferrer"><code>Series.le</code></a> - <code><=</code></li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ge.html" rel="nofollow noreferrer"><code>Series.ge</code></a> - <code>>=</code></li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ne.html" rel="nofollow noreferrer"><code>Series.ne</code></a> - <code>!=</code></li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> - <code>==</code></li>
</ul>
| 2020-01-11T12:35:26.573000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html
|
pandas.core.groupby.DataFrameGroupBy.filter#
pandas.core.groupby.DataFrameGroupBy.filter#
Use comparison functions like eq with GroupBy.transform and test whether all values match with GroupBy.all:
df1 = df[df['Col2'].eq(1).groupby(df['Group']).transform('all')]
print (df1)
Group Col2 Col3
6 Grp2 1 NaN
7 Grp2 1 NaN
8 Grp2 1 NaN
12 Grp4 1 NaN
Or get all groups with at least one non-matching value with ne and filter by Series.isin with an inverted mask (~) in boolean indexing:
df = df[~df['Group'].isin(df.loc[df['Col2'].ne(1), 'Group'])]
print (df)
Group Col2 Col3
6 Grp2 1 NaN
7 Grp2 1 NaN
8 Grp2 1 NaN
12 Grp4 1 NaN
If you want to compare against values less than 2:
# keep groups whose values are all less than 2
df1 = df[df['Col2'].lt(2).groupby(df['Group']).transform('all')]
# or remove groups containing any value greater than or equal to 2
df1 = df[~df['Group'].isin(df.loc[df['Col2'].ge(2), 'Group'])]
List of all functions for compare:
Series.lt - <
Series.gt - >
Series.le - <=
Series.ge - >=
Series.ne - !=
Series.eq - ==
DataFrameGroupBy.filter(func, dropna=True, *args, **kwargs)[source]#
Return a copy of a DataFrame excluding filtered elements.
Elements from groups are filtered if they do not satisfy the
boolean criterion specified by func.
Parameters
func : function
    Function to apply to each subframe. Should return True or False.
dropna : bool, default True
    Drop groups that do not pass the filter. If False, groups that evaluate False are filled with NaNs.
Returns
filtered : DataFrame
Notes
Each subframe is endowed the attribute ‘name’ in case you need to know
which group you are working on.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
... 'foo', 'bar'],
... 'B' : [1, 2, 3, 4, 5, 6],
... 'C' : [2.0, 5., 8., 1., 2., 9.]})
>>> grouped = df.groupby('A')
>>> grouped.filter(lambda x: x['B'].mean() > 3.)
A B C
1 bar 2 5.0
3 bar 4 1.0
5 bar 6 9.0
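Applied to the question above (a sketch, assuming the question's data frame is bound to a hypothetical name df_q), filter gives the requested result directly:
df_q.groupby('Group').filter(lambda g: g['Col2'].lt(2).all())   # keeps Grp2 and Grp4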
| 92
| 1,050
|
Pandas groupby and remove rows if a group has a number greater than a threshold
Group Col2 Col3
Grp1 1
Grp1 1
Grp1 1
Grp1 2
Grp1 3
Grp1 3
Grp2 1
Grp2 1
Grp2 1
Grp3 1
Grp3 2
Grp3 3
Grp4 1
And I would like to group by Group and remove all groups from the data frame where a number in Col2 exceeds 2.
Here I should get:
Group Col2
Grp2 1
Grp2 1
Grp2 1
Grp4 1
Does anyone have an idea?
|
70,264,126
|
Group by a column and compare dates: Pandas
|
<p>I have the following data frame.</p>
<pre><code>ID Date1 Date2
1 7-12-2021 20-11-2021
1 10-11-2021 01-12-2021
2 22-10-2021 03-12-2021
</code></pre>
<p>My idea: for duplicate values of the <code>ID</code> column, compare the two dates and keep the row where <code>Date2</code> is earlier than <code>Date1</code>. If the value of <code>ID</code> is unique, no comparison is needed and the row is kept as it is.</p>
<p>I would like to get the following output.</p>
<pre><code>ID Date1 Date2
1 10-11-2021 01-12-2021
2 22-10-2021 03-12-2021
</code></pre>
<p>I have tried the following, but it did not succeed.</p>
<pre><code>df = df.groupby(['ID'])[(df['Date1']) < (df['Date2'])]
</code></pre>
<p>Can anyone help me with this?</p>
| 70,264,346
| 2021-12-07T17:02:45.570000
| 2
| null | 3
| 140
|
python|pandas
|
<p>I would first make sure your Date columns are of datetime type, then check for duplicates in the ID column and, for those duplicated IDs, drop the row when Date2 falls after Date1 (which is what the code below does):</p>
<pre><code># Convert to datetime
df['Date1'] = pd.to_datetime(df['Date1'])
df['Date2'] = pd.to_datetime(df['Date2'])
# Mark what you need to drop
df.loc[df.ID.duplicated(keep=False),'ind'] = 'dup'
df['ind'] = np.where((df.ind.eq('dup')) & (df['Date2'] > df['Date1']),'Drop','Keep')
>>> print(df.loc[df['ind'].eq('Keep')].drop('ind',axis=1))
ID Date1 Date2
1 1 2021-10-11 2021-01-12
2 2 2021-10-22 2021-03-12
</code></pre>
| 2021-12-07T17:17:53.160000
| 2
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
I would first make sure your Date columns are of datetime type, then check for duplicates in the ID column and, for those duplicated IDs, drop the row when Date2 falls after Date1 (which is what the code below does):
# Convert to datetime
df['Date1'] = pd.to_datetime(df['Date1'])
df['Date2'] = pd.to_datetime(df['Date2'])
# Mark what you need to drop
df.loc[df.ID.duplicated(keep=False),'ind'] = 'dup'
df['ind'] = np.where((df.ind.eq('dup')) & (df['Date2'] > df['Date1']),'Drop','Keep')
>>> print(df.loc[df['ind'].eq('Keep')].drop('ind',axis=1))
ID Date1 Date2
1 1 2021-10-11 2021-01-12
2 2 2021-10-22 2021-03-12
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
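As a quick illustration (a minimal sketch, not part of the pandas docs), the SQL statement above maps onto a groupby aggregation roughly like this:
import pandas as pd
some_table = pd.DataFrame({
    "Column1": ["a", "a", "b"],
    "Column2": ["x", "x", "y"],
    "Column3": [1.0, 3.0, 5.0],
    "Column4": [10, 20, 30],
})
# SELECT Column1, Column2, mean(Column3), sum(Column4) ... GROUP BY Column1, Column2
some_table.groupby(["Column1", "Column2"]).agg({"Column3": "mean", "Column4": "sum"})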
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
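A dict (or Series) mapping of labels to group names works the same way; a small sketch (not from the docs) reusing df from above:
letter_types = {"A": "vowel", "B": "consonant", "C": "consonant", "D": "consonant"}
grouped = df.groupby(letter_types, axis=1)   # same grouping as get_letter_type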
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
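For instance (a sketch, not from the docs, assuming pandas is imported as pd), a label mapping function can derive the group key from a datetime index:
idx = pd.date_range("2000-01-01", periods=6, freq="M")
ts = pd.Series(range(6), index=idx)
ts.groupby(lambda d: d.quarter).sum()   # one group per calendar quarter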
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
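As a rough sketch of why this matters (using a small hypothetical frame with the same column layout as df), the grouping is set up once and every column selection reuses it:
import pandas as pd

df_small = pd.DataFrame({"A": ["foo", "bar", "foo"], "C": [1.0, 2.0, 3.0], "D": [4.0, 5.0, 6.0]})
grouped = df_small.groupby("A")      # grouping information derived once
c_means = grouped["C"].mean()        # column selection reuses that grouping
d_sums = grouped["D"].sum()          # no need to regroup for another column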
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function: Description
mean(): Compute mean of groups
sum(): Compute sum of group values
size(): Compute group sizes
count(): Compute count of group
std(): Standard deviation of groups
var(): Compute variance of groups
sem(): Standard error of the mean of groups
describe(): Generates descriptive statistics
first(): Compute first of group values
last(): Compute last of group values
nth(): Take nth value, or a subset if n is a list
min(): Compute min of group values
max(): Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
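For instance, a minimal sketch of a user-defined reducer (the value_range name and the small frame are only illustrative):
import pandas as pd

def value_range(ser):
    # reduces a Series to a single scalar, so it is a valid aggregation
    return ser.max() - ser.min()

df_small = pd.DataFrame({"A": ["foo", "bar", "foo", "bar"], "C": [1.0, 2.0, 5.0, 4.0]})
df_small.groupby("A")["C"].agg(value_range)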
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
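A minimal sketch of that pattern (the scaled_sum helper and factor=2 are illustrative, not part of the API):
import functools
import pandas as pd

def scaled_sum(ser, factor):
    # sum the group, then scale it by an extra, partially-applied argument
    return ser.sum() * factor

animals = pd.DataFrame({"kind": ["cat", "dog", "cat", "dog"], "height": [9.1, 6.0, 9.5, 34.0]})
animals.groupby("kind").agg(double_height=("height", functools.partial(scaled_sum, factor=2)))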
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions, when applied to a GroupBy object, will automatically transform
the input and return an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output, and may also be set as the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
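A minimal sketch of the required signature (assuming the optional numba dependency is installed; the function body is only illustrative):
import numpy as np
import pandas as pd

def group_mean(values, index):
    # `values` holds the group's data as a NumPy array, `index` its group index
    return np.mean(values)

df_nb = pd.DataFrame({"key": ["a", "a", "b"], "x": [1.0, 2.0, 3.0]})
df_nb.groupby("key")["x"].agg(group_mean, engine="numba")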
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
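As a rough illustration with the df above, the two spellings give the same numbers, but the first only aggregates the selected column:
df.groupby("A")["C"].std()                    # aggregate one column
df.groupby("A").std(numeric_only=True)["C"]   # aggregate every column, then select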
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. These columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
Grouping with ordered factors#
Categorical variables represented as instances of pandas' Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
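Since .pipe forwards any extra arguments to the function, a sketch like the following also works with the Store/Product frame above (the top_products helper and n=2 are illustrative):
def top_products(gb, column, n):
    # mean of `column` per group, keeping only the n largest means
    return gb[column].mean().nlargest(n)

df.groupby(["Store", "Product"]).pipe(top_products, "Revenue", n=2)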
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array (here containing only 0 and 1) which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 459
| 1,061
|
Group by a column and compare dates: Pandas
I have the following data frame.
ID Date1 Date2
1 7-12-2021 20-11-2021
1 10-11-2021 01-12-2021
2 22-10-2021 03-12-2021
My idea is: for duplicate values of the ID column, compare the two dates and keep the row if Date2 is earlier than Date1. If the value of ID is unique, there is no need to do the comparison and the row is kept as it is.
I would like to get the following output.
ID Date1 Date2
1 10-11-2021 01-12-2021
2 22-10-2021 03-12-2021
I have tried the following but did not succeed.
df = df.groupby(['ID'])[(df['Date1']) < (df['Date2'])]
Can anyone help me with this?
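One possible sketch (assuming day-first dates and, going by the expected output and the attempted comparison, keeping duplicated-ID rows only where Date1 is earlier than Date2):
import pandas as pd

df = pd.DataFrame({"ID": [1, 1, 2],
                   "Date1": ["7-12-2021", "10-11-2021", "22-10-2021"],
                   "Date2": ["20-11-2021", "01-12-2021", "03-12-2021"]})
d1 = pd.to_datetime(df["Date1"], dayfirst=True)
d2 = pd.to_datetime(df["Date2"], dayfirst=True)
# keep rows whose ID is unique, or (for duplicated IDs) where Date1 comes before Date2
out = df[~df["ID"].duplicated(keep=False) | (d1 < d2)]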
|
67,990,201
|
Pandas: How to correctly parse a dashed text flat file into a dataframe
|
<p>I have one ugly text file that looks like this:</p>
<pre><code>2020-06-13
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31434 |WA12 |200 |160 |AS12 |
|31435 |WA11 |26 |12 |AS11 |
|31436 |WA13 |202 |161 |AS16 |
----------------------------------------
2020-06-14
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31437 |WA12 |200 |160 |AS12 |
|31438 |WA11 |26 |12 |AS11 |
|31439 |WA13 |202 |161 |AS16 |
----------------------------------------
</code></pre>
<p>I try to parse it using the read_csv function like so:</p>
<pre><code>df = pd.read_csv(file, sep='|', error_bad_lines=False)
</code></pre>
<p>But my dataframe comes out with a lot of notable problems:</p>
<p>First problem, the first dashed lines (before the headers) and I also assume the last dashed lines (the footer) get inserted into the <em>Material</em> column and every other field in that row (Warehouse, Stock, Price, Vendor) is set to NaN</p>
<ul>
<li><p>Is there a way to skip these dashed lines: <code>----------------------------------------</code> ?</p>
</li>
<li><p>Second problem, is the separator | which shows up before and after the headers. Before <em>Order</em> and after <em>Vendor</em></p>
<p>|<em>Order</em> |Warehouse|Stock|Price|<em>Vendor</em>|</p>
<p>This make pandas return two NaN headers with empty values inside of them</p>
</li>
<li><p>Third problem is how to ignore this line |--------------------------------------| as well?</p>
</li>
</ul>
<p>Sorry if this question sounds confusing. I'm also unsure whether the proper way to do this is to change the parameters the read_csv function takes, or whether I need to normalize/sanitize/clean the file with some pure Python before sending it to the read_csv function.</p>
<p>Thank you</p>
| 67,991,948
| 2021-06-15T16:33:25.993000
| 2
| null | 2
| 153
|
python|pandas
|
<p>This data is really tricky, as you said, so we have to approach the logic another way:</p>
<h2>raw_data as provided:</h2>
<pre><code>$ cat weired_data
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31434 |WA12 |200 |160 |AS12 |
|31435 |WA11 |26 |12 |AS11 |
|31436 |WA13 |202 |161 |AS16 |
----------------------------------------
2020-06-14
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31437 |WA12 |200 |160 |AS12 |
|31438 |WA11 |26 |12 |AS11 |
|31439 |WA13 |202 |161 |AS16 |
----------------------------------------
</code></pre>
<h2>Cleaning data to form DataFrame:</h2>
<p>This is how you can parse your data to create a desired DataFrame:</p>
<pre><code>#!/home/nxf59093/Karn_python3/bin/python
import pandas as pd
##### Python pandas, widen output display to see more columns. ####
#pd.set_option('display.height', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('expand_frame_repr', True)
##################################################################
# read the file with `read_csv`
df = pd.read_csv("weired_data")
# replace all hyphens and pipes using `str.replace`
df['2020-06-13'] = df['2020-06-13'].str.replace('|', '')
df['2020-06-13'] = df['2020-06-13'].str.replace('-', '')
# every data row contains `31`, so use that to pick the lines
df = df[df['2020-06-13'].str.contains('31')]
# by default the column is named `2020-06-13`; limit the number of splits
# with `n=5` since we need 5 columns, which returns columns named 0 1 2 3 4
df = df['2020-06-13'].str.split(n=5, expand=True)
# now that we have numbered columns, rename them as desired
df = df.rename(columns={0: 'Order', 1: 'Warehouse', 2: 'Stock', 3: 'Price', 4: 'Vendor'})
print(df)
</code></pre>
<h2>Desired Parsed DataFrame:</h2>
<pre><code> $ ./test_data.py
Order Warehouse Stock Price Vendor
3 31434 WA12 200 160 AS12
4 31435 WA11 26 12 AS11
5 31436 WA13 202 161 AS16
11 31437 WA12 200 160 AS12
12 31438 WA11 26 12 AS11
13 31439 WA13 202 161 AS16
</code></pre>
<p><strong>Another Method with less slicing:</strong></p>
<pre><code>df = pd.read_csv("weired_data", sep="|", skiprows=2, index_col=0)
df = df.dropna(axis=0,how='all')
df.columns = df.columns.str.replace(' ', '')
df = df[df['Order'].str.contains('31')]
df = df.reset_index()
df = df.dropna(axis=1,how='all')
print(df)
</code></pre>
<p><strong>Result:</strong></p>
<pre><code> Order Warehouse Stock Price Vendor
0 31434 WA12 200 160 AS12
1 31435 WA11 26 12 AS11
2 31436 WA13 202 161 AS16
3 31437 WA12 200 160 AS12
4 31438 WA11 26 12 AS11
5 31439 WA13 202 161 AS16
</code></pre>
<p>If you want your data to look like SQL tables, you can use tabulate and print it.</p>
<pre><code>from tabulate import tabulate
print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))
+-----------+-------------+---------+---------+----------+
| Order | Warehouse | Stock | Price | Vendor |
|-----------+-------------+---------+---------+----------|
| 31434 | WA12 | 200 | 160 | AS12 |
| 31435 | WA11 | 26 | 12 | AS11 |
| 31436 | WA13 | 202 | 161 | AS16 |
| 31437 | WA12 | 200 | 160 | AS12 |
| 31438 | WA11 | 26 | 12 | AS11 |
| 31439 | WA13 | 202 | 161 | AS16 |
+-----------+-------------+---------+---------+----------+
</code></pre>
| 2021-06-15T18:51:54.287000
| 2
|
https://pandas.pydata.org/docs/user_guide/io.html
|
This data is really tricky, as you said, so we have to approach the logic another way:
raw_data as provided:
$ cat weired_data
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31434 |WA12 |200 |160 |AS12 |
|31435 |WA11 |26 |12 |AS11 |
|31436 |WA13 |202 |161 |AS16 |
----------------------------------------
2020-06-14
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31437 |WA12 |200 |160 |AS12 |
|31438 |WA11 |26 |12 |AS11 |
|31439 |WA13 |202 |161 |AS16 |
----------------------------------------
Cleaning data to form DataFrame:
This is how you can parse your data to create a desired DataFrame:
#!/home/nxf59093/Karn_python3/bin/python
import pandas as pd
##### Python pandas, widen output display to see more columns. ####
#pd.set_option('display.height', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('expand_frame_repr', True)
##################################################################
# read the file with `read_csv`
df = pd.read_csv("weired_data")
# replace all hyphens and pipes using `str.replace`
df['2020-06-13'] = df['2020-06-13'].str.replace('|', '')
df['2020-06-13'] = df['2020-06-13'].str.replace('-', '')
# every data row contains `31`, so use that to pick the lines
df = df[df['2020-06-13'].str.contains('31')]
# by default the column is named `2020-06-13`; limit the number of splits
# with `n=5` since we need 5 columns, which returns columns named 0 1 2 3 4
df = df['2020-06-13'].str.split(n=5, expand=True)
# now that we have numbered columns, rename them as desired
df = df.rename(columns={0: 'Order', 1: 'Warehouse', 2: 'Stock', 3: 'Price', 4: 'Vendor'})
print(df)
Desired Parsed DataFrame:
$ ./test_data.py
Order Warehouse Stock Price Vendor
3 31434 WA12 200 160 AS12
4 31435 WA11 26 12 AS11
5 31436 WA13 202 161 AS16
11 31437 WA12 200 160 AS12
12 31438 WA11 26 12 AS11
13 31439 WA13 202 161 AS16
Another Method with less slicing:
df = pd.read_csv("weired_data", sep="|", skiprows=2, index_col=0)
df = df.dropna(axis=0,how='all')
df.columns = df.columns.str.replace(' ', '')
df = df[df['Order'].str.contains('31')]
df = df.reset_index()
df = df.dropna(axis=1,how='all')
print(df)
Result:
Order Warehouse Stock Price Vendor
0 31434 WA12 200 160 AS12
1 31435 WA11 26 12 AS11
2 31436 WA13 202 161 AS16
3 31437 WA12 200 160 AS12
4 31438 WA11 26 12 AS11
5 31439 WA13 202 161 AS16
If you want your data to look like SQL tables, you can use tabulate and print it.
from tabulate import tabulate
print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))
+-----------+-------------+---------+---------+----------+
| Order | Warehouse | Stock | Price | Vendor |
|-----------+-------------+---------+---------+----------|
| 31434 | WA12 | 200 | 160 | AS12 |
| 31435 | WA11 | 26 | 12 | AS11 |
| 31436 | WA13 | 202 | 161 | AS16 |
| 31437 | WA12 | 200 | 160 | AS12 |
| 31438 | WA11 | 26 | 12 | AS11 |
| 31439 | WA13 | 202 | 161 | AS16 |
+-----------+-------------+---------+---------+----------+
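If this kind of parsing is needed repeatedly, the cleaning steps above can be wrapped into a small helper. This is only a minimal sketch following the second method; the function name parse_dashed_report is my own, not part of the original answer, and it assumes the file starts with the date line and the top dashed line (which skiprows=2 discards).

import pandas as pd

def parse_dashed_report(path):
    """Parse the dashed report file into a clean DataFrame (sketch)."""
    # '|' as separator; skip the leading date line and the top dashed line
    df = pd.read_csv(path, sep='|', skiprows=2, index_col=0)
    # drop rows and columns that are entirely NaN (full-width dashed lines,
    # date lines, and the empty trailing column created by the final '|')
    df = df.dropna(axis=0, how='all').dropna(axis=1, how='all')
    # strip the padding spaces from the header names
    df.columns = df.columns.str.strip()
    # keep only real data rows (order numbers in the sample all start with '31')
    df = df[df['Order'].str.contains('31', na=False)]
    return df.reset_index(drop=True)

# usage, assuming the sample file is saved as 'weired_data':
# print(parse_dashed_report('weired_data'))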
| 0
| 3,679
|
Pandas: How to correctly parse a dashed text flat file into a dataframe
I have one ugly text file that looks like this:
2020-06-13
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31434 |WA12 |200 |160 |AS12 |
|31435 |WA11 |26 |12 |AS11 |
|31436 |WA13 |202 |161 |AS16 |
----------------------------------------
2020-06-14
----------------------------------------
|Order |Warehouse|Stock|Price|Vendor|
|--------------------------------------|
|31437 |WA12 |200 |160 |AS12 |
|31438 |WA11 |26 |12 |AS11 |
|31439 |WA13 |202 |161 |AS16 |
----------------------------------------
I tried to parse it using the read_csv function like so:
df = pd.read_csv(file, sep='|', error_bad_lines=False)
But my dataframe comes out with a lot of notable problems:
First problem: the first dashed line (before the headers), and I assume the last dashed line (the footer) as well, gets inserted into the first (Order) column, and every other field in that row (Warehouse, Stock, Price, Vendor) is set to NaN.
Is there a way to skip these dashed lines: ---------------------------------------- ?
Second problem is the separator |, which shows up before and after the headers (before Order and after Vendor):
|Order |Warehouse|Stock|Price|Vendor|
This makes pandas return two NaN headers with empty values inside them.
Third problem: how do I ignore this line |--------------------------------------| as well?
Sorry if this question sounds confusing. I'm also unsure whether the proper way to do this is by changing the parameters that read_csv takes, or whether I need to normalize/sanitize/clean the file with some pure Python before sending it to read_csv.
Thank you
|
67,155,542
|
Pandas replace all but first in consecutive group
|
<p>The problem description is simple, but I cannot figure out how to make this work in Pandas. Basically, I'm trying to replace consecutive values (except the first) with some replacement value. For example:</p>
<pre><code>data = {
"A": [0, 1, 1, 1, 0, 0, 0, 0, 2, 2, 2, 2, 3]
}
df = pd.DataFrame.from_dict(data)
A
0 0
1 1
2 1
3 1
4 0
5 0
6 0
7 0
8 2
9 2
10 2
11 2
12 3
</code></pre>
<p>If I run this through some function <code>foo(df, 2, 0)</code> I would get the following:</p>
<pre><code> A
0 0
1 1
2 1
3 1
4 0
5 0
6 0
7 0
8 2
9 0
10 0
11 0
12 3
</code></pre>
<p>Which replaces all values of <code>2</code> with <code>0</code>, except for the first one. Is this possible?</p>
| 67,155,640
| 2021-04-19T03:17:52.277000
| 4
| null | 0
| 158
|
python|pandas
|
<p>You can find all the rows where <code>A = 2</code> and <code>A</code> is also equal to the previous <code>A</code> value and set them to 0:</p>
<pre class="lang-py prettyprint-override"><code>data = {
"A": [0, 1, 1, 1, 0, 0, 0, 0, 2, 2, 2, 2, 3]
}
df = pd.DataFrame.from_dict(data)
df[(df.A == 2) & (df.A == df.A.shift(1))] = 0
</code></pre>
<p>Output:</p>
<pre><code> A
0 0
1 1
2 1
3 1
4 0
5 0
6 0
7 0
8 2
9 0
10 0
11 0
12 3
</code></pre>
<p>If you have more than one column in the dataframe, use <code>df.loc</code> to just set the <code>A</code> values:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[(df.A == 2) & (df.A == df.A.shift(1)), 'A'] = 0
</code></pre>
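<p>Since the question phrased this as a function <code>foo(df, 2, 0)</code>, the same shift-based idea can be wrapped up generically. A minimal sketch (the name <code>replace_consecutive</code> and the <code>column</code> parameter are mine, not part of the answer above):</p>
<pre><code>import pandas as pd

def replace_consecutive(df, target, replacement, column='A'):
    # mark every `target` value whose previous value is also `target`,
    # i.e. everything in a consecutive run except its first element
    mask = (df[column] == target) & (df[column] == df[column].shift(1))
    out = df.copy()
    out.loc[mask, column] = replacement
    return out

data = {"A": [0, 1, 1, 1, 0, 0, 0, 0, 2, 2, 2, 2, 3]}
df = pd.DataFrame.from_dict(data)
print(replace_consecutive(df, 2, 0))
</code></pre>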
| 2021-04-19T03:33:24.283000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html
|
pandas.Series.replace#
Series.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the Series are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : bool, default False
If True, performs operation inplace and returns None.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method{‘pad’, ‘ffill’, ‘bfill’}The method to use when for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
Series
Object after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
Series.fillna : Fill NA values.
Series.where : Replace values based on boolean condition.
Series.str.replace : Simple string replacement.
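The difference between Series.replace and Series.str.replace trips people up often enough to be worth a tiny illustration. This is a minimal sketch added here for contrast; the example values are mine, not from the documentation:

import pandas as pd

s = pd.Series(["cat", "category", "dog"])

# Series.replace matches whole cell values: only the exact value "cat" changes
print(s.replace("cat", "feline").tolist())                    # ['feline', 'category', 'dog']

# Series.str.replace does substring replacement inside each value
print(s.str.replace("cat", "feline", regex=False).tolist())   # ['feline', 'felineegory', 'dog']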
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When dict is used as the to_replace value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, it is like the
value(s) in the dict are equal to the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| 694
| 1,173
|
Pandas replace all but first in consecutive group
The problem description is simple, but I cannot figure out how to make this work in Pandas. Basically, I'm trying to replace consecutive values (except the first) with some replacement value. For example:
data = {
"A": [0, 1, 1, 1, 0, 0, 0, 0, 2, 2, 2, 2, 3]
}
df = pd.DataFrame.from_dict(data)
A
0 0
1 1
2 1
3 1
4 0
5 0
6 0
7 0
8 2
9 2
10 2
11 2
12 3
If I run this through some function foo(df, 2, 0) I would get the following:
A
0 0
1 1
2 1
3 1
4 0
5 0
6 0
7 0
8 2
9 0
10 0
11 0
12 3
Which replaces all values of 2 with 0, except for the first one. Is this possible?
|
64,003,727
|
Create regular time series from irregular interval with python
|
<p>I wonder if it is possible to convert an irregular time series interval to a regular one without interpolating the values from the other column, like this:</p>
<pre><code>Index count
2018-01-05 00:00:00 1
2018-01-07 00:00:00 4
2018-01-08 00:00:00 15
2018-01-11 00:00:00 2
2018-01-14 00:00:00 5
2018-01-19 00:00:00 5
....
2018-12-26 00:00:00 6
2018-12-29 00:00:00 7
2018-12-30 00:00:00 8
</code></pre>
<p>And I expect the result to be something like this:</p>
<pre><code>Index count
2018-01-01 00:00:00 0
2018-01-02 00:00:00 0
2018-01-03 00:00:00 0
2018-01-04 00:00:00 0
2018-01-05 00:00:00 1
2018-01-06 00:00:00 0
2018-01-07 00:00:00 4
2018-01-08 00:00:00 15
2018-01-09 00:00:00 0
2018-01-10 00:00:00 0
2018-01-11 00:00:00 2
2018-01-12 00:00:00 0
2018-01-13 00:00:00 0
2018-01-14 00:00:00 5
2018-01-15 00:00:00 0
2018-01-16 00:00:00 0
2018-01-17 00:00:00 0
2018-01-18 00:00:00 0
2018-01-19 00:00:00 5
....
2018-12-26 00:00:00 6
2018-12-27 00:00:00 0
2018-12-28 00:00:00 0
2018-12-29 00:00:00 7
2018-12-30 00:00:00 8
2018-12-31 00:00:00 0
</code></pre>
<p>So far I have only tried resample from pandas, but it only partially solved my problem.</p>
<p>Thanks in advance</p>
| 64,003,740
| 2020-09-22T05:47:00.663000
| 1
| null | 2
| 436
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a>:</p>
<pre><code>#if necessary
df.index = pd.to_datetime(df.index)
df = df.reindex(pd.date_range('2018-01-01','2018-12-31'), fill_value=0)
print (df)
count
2018-01-01 0
2018-01-02 0
2018-01-03 0
2018-01-04 0
2018-01-05 1
...
2018-12-27 0
2018-12-28 0
2018-12-29 7
2018-12-30 8
2018-12-31 0
[365 rows x 1 columns]
</code></pre>
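<p>If the index is already a DatetimeIndex, <code>asfreq</code> can achieve a similar result. A minimal sketch (the toy data below is mine, not from the question); note that unlike <code>reindex</code> with an explicit <code>date_range</code>, <code>asfreq</code> only spans from the first to the last existing date, so it will not add the leading 2018-01-01 to 2018-01-04 rows.</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {"count": [1, 4, 15]},
    index=pd.to_datetime(["2018-01-05", "2018-01-07", "2018-01-08"]),
)

# one row per calendar day between the first and last existing dates;
# fill_value=0 fills the newly created days
regular = df.asfreq("D", fill_value=0)
print(regular)
</code></pre>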
| 2020-09-22T05:48:51.527000
| 2
|
https://pandas.pydata.org/docs/user_guide/timeseries.html
|
Time series / date functionality#
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for
performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
Time spans: A span of time defined by a point in time and its associated frequency.
Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
Concept      | Scalar Class | Array Class    | pandas Data Type                     | Primary Creation Method
Date times   | Timestamp    | DatetimeIndex  | datetime64[ns] or datetime64[ns, tz] | to_datetime or date_range
Time deltas  | Timedelta    | TimedeltaIndex | timedelta64[ns]                      | to_timedelta or timedelta_range
Time spans   | Period       | PeriodIndex    | period[freq]                         | Period or period_range
Date offsets | DateOffset   | None           | None                                 | DateOffset
For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can directly also support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta
and Period data when passed into those constructors. DateOffset
data however will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT which
is useful for representing missing or null date like values and behaves similar
as np.nan does for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates
values with points in time. For pandas objects it means using the points in
time.
In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[28]: Timestamp('2012-05-01 00:00:00')
In [29]: pd.Timestamp("2012-05-01")
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change
variables with a time span instead. The span represented by Period can be
specified explicitly, or inferred from datetime string format.
For example:
In [31]: pd.Period("2011-01")
Out[31]: Period('2011-01', 'M')
In [32]: pd.Period("2012-05", freq="D")
Out[32]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of
Timestamp and Period are automatically coerced to DatetimeIndex
and PeriodIndex respectively.
In [33]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [34]: ts = pd.Series(np.random.randn(3), dates)
In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex
In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [38]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [39]: ts = pd.Series(np.random.randn(3), periods)
In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex
In [41]: ts.index
Out[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp and sequences of timestamps using instances of
DatetimeIndex. For regular time spans, pandas uses Period objects for
scalar values and PeriodIndex for sequences of spans. Better support for
irregular intervals with arbitrary start and end points is forthcoming in
future releases.
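As a quick illustration of converting between the two representations with to_period() and to_timestamp() (a minimal sketch added here; it is not part of the surrounding documentation text):

import numpy as np
import pandas as pd

ts = pd.Series(np.random.randn(3), index=pd.date_range("2012-01-01", periods=3, freq="M"))

ps = ts.to_period()        # index becomes PeriodIndex(['2012-01', '2012-02', '2012-03'])
back = ps.to_timestamp()   # index is a DatetimeIndex again (the start of each period)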
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings,
epochs, or a mixture, you can use the to_datetime function. When passed
a Series, this returns a Series (with the same index), while a list-like
is converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(["Jul 31, 2009", "2010-01-10", None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [44]: pd.to_datetime(["2005/11/23", "2010.12.31"])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst flag:
In [45]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [46]: pd.to_datetime(["14-01-2012", "01-14-2012"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst were False, and in the case of parsing delimited date strings
(e.g. 31-12-2012) then a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp.
Timestamp can also accept string input, but it doesn’t accept string parsing
options like dayfirst or format, so use to_datetime if these are required.
In [47]: pd.to_datetime("2010/11/12")
Out[47]: Timestamp('2010-11-12 00:00:00')
In [48]: pd.Timestamp("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex constructor directly:
In [49]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [51]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[51]: Timestamp('2010-11-12 00:00:00')
In [52]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[52]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [53]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [55]: pd.to_datetime(df[["year", "month", "day"]])
Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format
Pass errors='ignore' to return the original input when unparsable:
In [56]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[56]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce' to convert unparsable data to NaT (not a time):
In [57]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp and
DatetimeIndex. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin parameter.
In [58]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [59]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter
that was discussed above. The
available units are listed on the documentation for pandas.to_datetime().
Changed in version 1.0.0.
Constructing a Timestamp or DatetimeIndex with an epoch timestamp
with the tz argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [60]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use a fixed-width
type (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [63]: pd.to_datetime(1490195805433502912, unit="ns")
Out[63]: Timestamp('2017-03-22 15:16:45.433502912')
See also
Using the origin parameter
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [64]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the
“unit” (1 second).
In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
Using the origin parameter#
Using the origin parameter, one can specify an alternative starting point for creation
of a DatetimeIndex. For example, to use 1960-01-01 as the starting date:
In [67]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00.
Commonly called ‘unix epoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit="D")
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex or
Index constructor and pass in a list of datetime objects:
In [69]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [70]: index = pd.DatetimeIndex(dates)
In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [72]: index = pd.Index(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a
calendar day while the default for bdate_range is a business day:
In [74]: start = datetime.datetime(2011, 1, 1)
In [75]: end = datetime.datetime(2012, 1, 1)
In [76]: index = pd.date_range(start, end)
In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [78]: index = pd.bdate_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a
variety of frequency aliases:
In [80]: pd.date_range(start, periods=1000, freq="M")
Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [81]: pd.bdate_range(start, periods=250, freq="BQS")
Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range and bdate_range make it easy to generate a range of dates
using various combinations of parameters like start, end, periods,
and freq. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [82]: pd.date_range(start, end, freq="BM")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [83]: pd.date_range(start, end, freq="W")
Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [84]: pd.bdate_range(end=end, periods=20)
Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [85]: pd.bdate_range(start=start, periods=20)
Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start, end, and periods will generate a range of evenly spaced
dates from start to end inclusively, with periods number of elements in the
resulting DatetimeIndex:
In [86]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [87]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range can also generate a range of custom frequency dates by using
the weekmask and holidays parameters. These parameters will only be
used if a custom frequency string is passed.
In [88]: weekmask = "Mon Wed Fri"
In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [90]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[90]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [91]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
See also
Custom business days
Timestamp limitations#
Since pandas represents timestamps in nanosecond resolution, the time span that
can be represented using a 64-bit integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145224193')
In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')
See also
Representing out-of-bounds spans
Indexing#
One of the main uses for DatetimeIndex is as an index for pandas objects.
The DatetimeIndex class contains many time series related optimizations:
A large range of dates for various offsets are pre-computed and cached
under the hood in order to make generating subsequent date ranges very fast
(just have to grab a slice).
Fast shifting using the shift method on pandas objects.
Unioning of overlapping DatetimeIndex objects with the same frequency is
very fast (important for fast data alignment).
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic (see the short asof sketch below).
DatetimeIndex objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
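As a small illustration of the asof logic mentioned in the list above (a minimal sketch; the toy data is mine, not from the documentation):

import numpy as np
import pandas as pd

rng = pd.date_range("2011-01-31", periods=6, freq="M")
ts = pd.Series(np.arange(6.0), index=rng)

# asof returns the most recent value at or before the given timestamp
print(ts.asof("2011-04-15"))   # 2.0, the value stamped 2011-03-31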
See also
Reindexing methods
Note
While pandas does not force you to have a sorted date index, some of these
methods may have unexpected or incorrect behavior if the dates are unsorted.
DatetimeIndex can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [94]: rng = pd.date_range(start, end, freq="BM")
In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [99]: ts["1/31/2011"]
Out[99]: 0.11920871129693428
In [100]: ts[datetime.datetime(2011, 12, 25):]
Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [101]: ts["10/31/2011":"12/31/2011"]
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in
the year or year and month as strings:
In [102]: ts["2011"]
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [103]: ts["2011-6"]
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame rows with a single string with getitem (e.g. frame[dtstring])
is deprecated starting with pandas 1.2.0 (given the ambiguity whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc (e.g. frame.loc[dtstring]) is still supported.
In [104]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [106]: dft.loc["2013"]
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and
time for the month:
In [107]: dft["2013-1":"2013-2"]
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [108]: dft["2013-1":"2013-2-28"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [109]: dft["2013-1":"2013-2-28 00:00:00"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [110]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [111]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [113]: dft2.loc["2013-01-05"]
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [114]: idx = pd.IndexSlice
In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [116]: dft2.loc[idx[:, "2013-01-05"], :]
Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
New in version 0.25.0.
Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0
In [119]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[119]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series object with a minute resolution index:
In [120]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [121]: series_minute.index.resolution
Out[121]: 'minute'
A timestamp string less accurate than a minute gives a Series object.
In [122]: series_minute["2011-12-31 23"]
Out[122]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [123]: series_minute["2011-12-31 23:59"]
Out[123]: 1
In [124]: series_minute["2011-12-31 23:59:00"]
Out[124]: 1
If index resolution is second, then the minute-accurate timestamp gives a
Series.
In [125]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [126]: series_second.index.resolution
Out[126]: 'second'
In [127]: series_second["2011-12-31 23:59"]
Out[127]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index DataFrame with .loc[] as well.
In [128]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [129]: dft_minute.loc["2011-12-31 23"]
Out[129]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame's [] will be column-wise and not row-wise, see Indexing Basics. For example, dft_minute['2011-12-31 23:59'] will raise KeyError, as '2011-12-31 23:59' has the same resolution as the index and there is no column with such a name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc["2011-12-31 23:59"]
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [132]: series_monthly.index.resolution
Out[132]: 'day'
In [133]: series_monthly["2011-12"] # returns Series
Out[133]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With no defaults.
In [135]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate() convenience function is provided that is similar
to slicing. Note that truncate assumes a 0 value for any unspecified date
component in a DatetimeIndex in contrast to slicing which returns any
partially matching dates:
In [136]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [138]: ts2.truncate(before="2011-11", after="2011-12")
Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [139]: ts2["2011-11":"2011-12"]
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex frequency
regularity will result in a DatetimeIndex, although frequency is lost:
In [140]: ts2[[0, 2, 6]].index
Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
Property         | Description
year             | The year of the datetime
month            | The month of the datetime
day              | The days of the datetime
hour             | The hour of the datetime
minute           | The minutes of the datetime
second           | The seconds of the datetime
microsecond      | The microseconds of the datetime
nanosecond       | The nanoseconds of the datetime
date             | Returns datetime.date (does not contain timezone information)
time             | Returns datetime.time (does not contain timezone information)
timetz           | Returns datetime.time as local time with timezone information
dayofyear        | The ordinal day of year
day_of_year      | The ordinal day of year
weekofyear       | The week ordinal of the year
week             | The week ordinal of the year
dayofweek        | The number of the day of the week with Monday=0, Sunday=6
day_of_week      | The number of the day of the week with Monday=0, Sunday=6
weekday          | The number of the day of the week with Monday=0, Sunday=6
quarter          | Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month    | The number of days in the month of the datetime
is_month_start   | Logical indicating if first day of month (defined by frequency)
is_month_end     | Logical indicating if last day of month (defined by frequency)
is_quarter_start | Logical indicating if first day of quarter (defined by frequency)
is_quarter_end   | Logical indicating if last day of quarter (defined by frequency)
is_year_start    | Logical indicating if first day of year (defined by frequency)
is_year_end      | Logical indicating if last day of year (defined by frequency)
is_leap_year     | Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can
access these properties via the .dt accessor, as detailed in the section
on .dt accessors.
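A minimal sketch of the .dt accessor mentioned above (the example values are mine, not from the documentation):

import pandas as pd

s = pd.Series(pd.date_range("2013-01-01 09:10:12", periods=3, freq="D"))

print(s.dt.day.tolist())             # [1, 2, 3]
print(s.dt.day_name().tolist())      # ['Tuesday', 'Wednesday', 'Thursday']
print(s.dt.is_month_start.tolist())  # [True, False, False]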
New in version 1.1.0.
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [141]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [142]: idx.isocalendar()
Out[142]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [143]: idx.to_series().dt.isocalendar()
Out[143]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify
a frequency that defined:
how the date times in DatetimeIndex were spaced when using date_range()
the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset
is similar to a Timedelta that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta day will always increment datetimes by 24 hours, while a DateOffset day
will increment datetimes to the same time the next day whether a day represents 23, 24 or 25 hours due to daylight
savings time. However, all DateOffset subclasses that are an hour or smaller
(Hour, Minute, Second, Milli, Micro, Nano) behave like
Timedelta and respect absolute time.
The basic DateOffset acts similar to dateutil.relativedelta (relativedelta documentation)
that shifts a date time by the corresponding calendar duration specified. The
arithmetic operator (+) can be used to perform the shift.
# This particular day contains a day light savings time transition
In [144]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [145]: ts + pd.Timedelta(days=1)
Out[145]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [146]: ts + pd.DateOffset(days=1)
Out[146]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [147]: friday = pd.Timestamp("2018-01-05")
In [148]: friday.day_name()
Out[148]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [149]: two_business_days = 2 * pd.offsets.BDay()
In [150]: friday + two_business_days
Out[150]: Timestamp('2018-01-09 00:00:00')
In [151]: (friday + two_business_days).day_name()
Out[151]: 'Tuesday'
Most DateOffsets have associated frequencies strings, or offset aliases, that can be passed
into freq keyword arguments. The available date offsets and associated frequency strings can be found below:
Date Offset                              | Frequency String | Description
DateOffset                               | None             | Generic offset class, defaults to absolute 24 hours
BDay or BusinessDay                      | 'B'              | business day (weekday)
CDay or CustomBusinessDay                | 'C'              | custom business day
Week                                     | 'W'              | one week, optionally anchored on a day of the week
WeekOfMonth                              | 'WOM'            | the x-th day of the y-th week of each month
LastWeekOfMonth                          | 'LWOM'           | the x-th day of the last week of each month
MonthEnd                                 | 'M'              | calendar month end
MonthBegin                               | 'MS'             | calendar month begin
BMonthEnd or BusinessMonthEnd            | 'BM'             | business month end
BMonthBegin or BusinessMonthBegin        | 'BMS'            | business month begin
CBMonthEnd or CustomBusinessMonthEnd     | 'CBM'            | custom business month end
CBMonthBegin or CustomBusinessMonthBegin | 'CBMS'           | custom business month begin
SemiMonthEnd                             | 'SM'             | 15th (or other day_of_month) and calendar month end
SemiMonthBegin                           | 'SMS'            | 15th (or other day_of_month) and calendar month begin
QuarterEnd                               | 'Q'              | calendar quarter end
QuarterBegin                             | 'QS'             | calendar quarter begin
BQuarterEnd                              | 'BQ'             | business quarter end
BQuarterBegin                            | 'BQS'            | business quarter begin
FY5253Quarter                            | 'REQ'            | retail (aka 52-53 week) quarter
YearEnd                                  | 'A'              | calendar year end
YearBegin                                | 'AS' or 'BYS'    | calendar year begin
BYearEnd                                 | 'BA'              | business year end
BYearBegin                               | 'BAS'            | business year begin
FY5253                                   | 'RE'             | retail (aka 52-53 week) year
Easter                                   | None             | Easter holiday
BusinessHour                             | 'BH'             | business hour
CustomBusinessHour                       | 'CBH'            | custom business hour
Day                                      | 'D'              | one absolute day
Hour                                     | 'H'              | one hour
Minute                                   | 'T' or 'min'     | one minute
Second                                   | 'S'              | one second
Milli                                    | 'L' or 'ms'      | one millisecond
Micro                                    | 'U' or 'us'      | one microsecond
Nano                                     | 'N'              | one nanosecond
DateOffsets additionally have rollforward() and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [152]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [153]: ts.day_name()
Out[153]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [154]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [155]: offset.rollforward(ts)
Out[155]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [156]: ts + offset
Out[156]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize() before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [157]: ts = pd.Timestamp("2014-01-01 09:00")
In [158]: day = pd.offsets.Day()
In [159]: day + ts
Out[159]: Timestamp('2014-01-02 09:00:00')
In [160]: (day + ts).normalize()
Out[160]: Timestamp('2014-01-02 00:00:00')
In [161]: ts = pd.Timestamp("2014-01-01 22:00")
In [162]: hour = pd.offsets.Hour()
In [163]: hour + ts
Out[163]: Timestamp('2014-01-01 23:00:00')
In [164]: (hour + ts).normalize()
Out[164]: Timestamp('2014-01-01 00:00:00')
In [165]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[165]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week offset for generating weekly data accepts a
weekday parameter which results in the generated dates always lying on a
particular day of the week:
In [166]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [167]: d
Out[167]: datetime.datetime(2008, 8, 18, 9, 0)
In [168]: d + pd.offsets.Week()
Out[168]: Timestamp('2008-08-25 09:00:00')
In [169]: d + pd.offsets.Week(weekday=4)
Out[169]: Timestamp('2008-08-22 09:00:00')
In [170]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[170]: 4
In [171]: d - pd.offsets.Week()
Out[171]: Timestamp('2008-08-11 09:00:00')
The normalize option will be effective for addition and subtraction.
In [172]: d + pd.offsets.Week(normalize=True)
Out[172]: Timestamp('2008-08-25 00:00:00')
In [173]: d - pd.offsets.Week(normalize=True)
Out[173]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd with the specific ending month:
In [174]: d + pd.offsets.YearEnd()
Out[174]: Timestamp('2008-12-31 09:00:00')
In [175]: d + pd.offsets.YearEnd(month=6)
Out[175]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series or DatetimeIndex to
apply the offset to each element.
In [176]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [177]: s = pd.Series(rng)
In [178]: rng
Out[178]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [179]: rng + pd.DateOffset(months=2)
Out[179]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [180]: s + pd.DateOffset(months=2)
Out[180]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [181]: s - pd.DateOffset(months=2)
Out[181]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour,
Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the
Timedelta section for more examples.
In [182]: s - pd.offsets.Day(2)
Out[182]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [183]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [184]: td
Out[184]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [185]: td + pd.offsets.Minute(15)
Out[185]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a
vectorized implementation. They can still be used but may
calculate significantly slower and will show a PerformanceWarning
In [186]: rng + pd.offsets.BQuarterEnd()
Out[186]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay or CustomBusinessDay class provides a parametric
BusinessDay class which can be used to create customized business day
calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.
In [187]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [188]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [189]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [190]: dt = datetime.datetime(2013, 4, 30)
In [191]: dt + 2 * bday_egypt
Out[191]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [192]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [193]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[193]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the
holiday calendar section for more information.
In [194]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [195]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [196]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [197]: dt + bday_us
Out[197]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined
in the usual way.
In [198]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [199]: dt = datetime.datetime(2013, 12, 17)
In [200]: dt + bmth_us
Out[200]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [201]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[201]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string ‘C’ is used to indicate that a CustomBusinessDay
DateOffset is used. It is important to note that since CustomBusinessDay is
a parameterised type, instances of CustomBusinessDay may differ, and this is
not detectable from the ‘C’ frequency string. The user therefore needs to
ensure that the ‘C’ frequency string is used consistently within the user’s
application.
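As a small illustrative sketch of this point, two differently parameterized CustomBusinessDay instances report the same ‘C’ frequency string:
# Sketch: the 'C' alias does not encode the CustomBusinessDay parameters.
import pandas as pd

cbd_default = pd.offsets.CustomBusinessDay()
cbd_egypt = pd.offsets.CustomBusinessDay(weekmask="Sun Mon Tue Wed Thu")
cbd_default.freqstr  # 'C'
cbd_egypt.freqstr    # also 'C', even though the two offsets behave differently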
Business hour#
The BusinessHour class provides a business hour representation on BusinessDay,
allowing you to use specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours.
Adding BusinessHour will increment a Timestamp by hourly frequency.
If the target Timestamp is outside business hours, it is first moved to the next business hour
and then incremented. If the result exceeds the business hours end, the remaining
hours are added to the next business day.
In [202]: bh = pd.offsets.BusinessHour()
In [203]: bh
Out[203]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [204]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[204]: 4
In [205]: pd.Timestamp("2014-08-01 10:00") + bh
Out[205]: Timestamp('2014-08-01 11:00:00')
# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [206]: pd.Timestamp("2014-08-01 08:00") + bh
Out[206]: Timestamp('2014-08-01 10:00:00')
# If the result is on the end time, move to the next business day
In [207]: pd.Timestamp("2014-08-01 16:00") + bh
Out[207]: Timestamp('2014-08-04 09:00:00')
# The remaining time is added to the next day
In [208]: pd.Timestamp("2014-08-01 16:30") + bh
Out[208]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [209]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[209]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [210]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[210]: Timestamp('2014-07-31 15:00:00')
You can also specify start and end time by keywords. The argument must
be a str with an hour:minute representation or a datetime.time
instance. Specifying seconds, microseconds and nanoseconds as business hour
results in ValueError.
In [211]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [212]: bh
Out[212]: <BusinessHour: BH=11:00-20:00>
In [213]: pd.Timestamp("2014-08-01 13:00") + bh
Out[213]: Timestamp('2014-08-01 14:00:00')
In [214]: pd.Timestamp("2014-08-01 09:00") + bh
Out[214]: Timestamp('2014-08-01 12:00:00')
In [215]: pd.Timestamp("2014-08-01 18:00") + bh
Out[215]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than the end time represents a midnight business hour.
In this case, the business hours exceed midnight and overlap into the next day.
Valid business hours are distinguished by whether they started from a valid BusinessDay.
In [216]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [217]: bh
Out[217]: <BusinessHour: BH=17:00-09:00>
In [218]: pd.Timestamp("2014-08-01 17:00") + bh
Out[218]: Timestamp('2014-08-01 18:00:00')
In [219]: pd.Timestamp("2014-08-01 23:00") + bh
Out[219]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [220]: pd.Timestamp("2014-08-02 04:00") + bh
Out[220]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [221]: pd.Timestamp("2014-08-04 04:00") + bh
Out[221]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours results in
the next business hour start or the previous day’s end, respectively. Different from other offsets, BusinessHour.rollforward
may, by definition, output different results from apply.
This is because one day’s business hour end is equal to next day’s business hour start. For example,
under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and
2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [222]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[222]: Timestamp('2014-08-01 17:00:00')
In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[223]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [224]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[224]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay never overlaps.
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[226]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary
holidays, you can use CustomBusinessHour offset, as explained in the
following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which
allows you to specify arbitrary holidays. CustomBusinessHour works the same
as BusinessHour except that it skips specified custom holidays.
In [227]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [228]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [229]: dt = datetime.datetime(2014, 1, 17, 15)
In [230]: dt + bhour_us
Out[230]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [231]: dt + bhour_us * 2
Out[231]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
In [232]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [233]: dt + bhour_mon * 2
Out[233]: Timestamp('2014-01-21 10:00:00')
Offset aliases#
A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as offset aliases.
B: business day frequency
C: custom business day frequency
D: calendar day frequency
W: weekly frequency
M: month end frequency
SM: semi-month end frequency (15th and end of month)
BM: business month end frequency
CBM: custom business month end frequency
MS: month start frequency
SMS: semi-month start frequency (1st and 15th)
BMS: business month start frequency
CBMS: custom business month start frequency
Q: quarter end frequency
BQ: business quarter end frequency
QS: quarter start frequency
BQS: business quarter start frequency
A, Y: year end frequency
BA, BY: business year end frequency
AS, YS: year start frequency
BAS, BYS: business year start frequency
BH: business hour frequency
H: hourly frequency
T, min: minutely frequency
S: secondly frequency
L, ms: milliseconds
U, us: microseconds
N: nanoseconds
Note
When using the offset aliases above, it should be noted that functions
such as date_range() and bdate_range() will only return
timestamps that are in the interval defined by start_date and
end_date. If the start_date does not correspond to the frequency,
the returned timestamps will start at the next valid timestamp; likewise,
if the end_date does not correspond to the frequency, the returned
timestamps will stop at the previous valid timestamp.
For example, for the offset MS, if the start_date is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date is not the first day of a month, the last
returned timestamp will be the first day of the corresponding month.
In [234]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [235]: dates_lst_1
Out[235]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [236]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [237]: dates_lst_2
Out[237]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
We can see in the above example that date_range() and
bdate_range() will only return the valid timestamps between the
start_date and end_date. If these are not valid timestamps for the
given frequency, it will roll to the next valid value for start_date
(respectively the previous one for end_date).
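As a short hedged sketch, bdate_range() shows the same rolling behavior for the business-day frequency:
# Sketch: a weekend start_date is rolled forward to the next business day.
import pandas as pd

# 2020-01-04 is a Saturday and 2020-01-12 is a Sunday, so the returned
# timestamps are expected to run from Monday 2020-01-06 through Friday 2020-01-10.
pd.bdate_range("2020-01-04", "2020-01-12")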
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in
most functions:
In [238]: pd.date_range(start, periods=5, freq="B")
Out[238]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [239]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[239]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine together day and intraday offsets:
In [240]: pd.date_range(start, periods=10, freq="2h20min")
Out[240]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [241]: pd.date_range(start, periods=10, freq="1D10U")
Out[241]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
W-SUN: weekly frequency (Sundays). Same as ‘W’
W-MON: weekly frequency (Mondays)
W-TUE: weekly frequency (Tuesdays)
W-WED: weekly frequency (Wednesdays)
W-THU: weekly frequency (Thursdays)
W-FRI: weekly frequency (Fridays)
W-SAT: weekly frequency (Saturdays)
(B)Q(S)-DEC: quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN: quarterly frequency, year ends in January
(B)Q(S)-FEB: quarterly frequency, year ends in February
(B)Q(S)-MAR: quarterly frequency, year ends in March
(B)Q(S)-APR: quarterly frequency, year ends in April
(B)Q(S)-MAY: quarterly frequency, year ends in May
(B)Q(S)-JUN: quarterly frequency, year ends in June
(B)Q(S)-JUL: quarterly frequency, year ends in July
(B)Q(S)-AUG: quarterly frequency, year ends in August
(B)Q(S)-SEP: quarterly frequency, year ends in September
(B)Q(S)-OCT: quarterly frequency, year ends in October
(B)Q(S)-NOV: quarterly frequency, year ends in November
(B)A(S)-DEC: annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN: annual frequency, anchored end of January
(B)A(S)-FEB: annual frequency, anchored end of February
(B)A(S)-MAR: annual frequency, anchored end of March
(B)A(S)-APR: annual frequency, anchored end of April
(B)A(S)-MAY: annual frequency, anchored end of May
(B)A(S)-JUN: annual frequency, anchored end of June
(B)A(S)-JUL: annual frequency, anchored end of July
(B)A(S)-AUG: annual frequency, anchored end of August
(B)A(S)-SEP: annual frequency, anchored end of September
(B)A(S)-OCT: annual frequency, anchored end of October
(B)A(S)-NOV: annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors
for DatetimeIndex, as well as various other timeseries-related functions
in pandas.
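For instance, a minimal sketch of passing an anchored alias as freq (the dates in the comments are the expected anchors):
# Sketch: weekly frequency anchored on Wednesdays.
import pandas as pd

# 2011-01-01 is a Saturday, so the first Wednesday on or after it is 2011-01-05.
pd.date_range("2011-01-01", periods=3, freq="W-WED")
# expected: DatetimeIndex(['2011-01-05', '2011-01-12', '2011-01-19'], freq='W-WED')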
Anchored offset semantics#
For those offsets that are anchored to the start or end of specific
frequency (MonthEnd, MonthBegin, WeekEnd, etc), the following
rules apply to rolling forward and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous)
anchor point and then moved |n|-1 additional steps forwards or backwards.
In [242]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[242]: Timestamp('2014-02-01 00:00:00')
In [243]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[243]: Timestamp('2014-01-31 00:00:00')
In [244]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-01-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2013-12-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[246]: Timestamp('2014-05-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[247]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards
or backwards.
In [248]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[248]: Timestamp('2014-02-01 00:00:00')
In [249]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[249]: Timestamp('2014-02-28 00:00:00')
In [250]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2013-12-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2013-12-31 00:00:00')
In [252]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[252]: Timestamp('2014-05-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[253]: Timestamp('2013-10-01 00:00:00')
For the case when n=0, the date is not moved if on an anchor point, otherwise
it is rolled forward to the next anchor point.
In [254]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[254]: Timestamp('2014-02-01 00:00:00')
In [255]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[255]: Timestamp('2014-01-31 00:00:00')
In [256]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-01-01 00:00:00')
In [257]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used
with CustomBusinessDay or in other analysis that requires a predefined
set of holidays. The AbstractHolidayCalendar class provides all the necessary
methods to return a list of holidays and only rules need to be defined
in a specific holiday calendar class. Furthermore, the start_date and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar is the
only calendar that exists and primarily serves as an example for developing
other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an
observance rule determines when that holiday is observed if it falls on a weekend
or some other non-observed day. Defined observance rules are:
nearest_workday: move Saturday to Friday and Sunday to Monday
sunday_to_monday: move Sunday to following Monday
next_monday_or_tuesday: move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday: move Saturday and Sunday to previous Friday
next_monday: move Saturday and Sunday to following Monday
An example of how holidays and holiday calendars are defined:
In [258]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [259]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [260]: cal = ExampleCalendar()
In [261]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[261]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint
weekday=MO(2) is the same as 2 * Week(weekday=2)
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar. Like any other offset,
it can be used to create a DatetimeIndex or added to datetime
or Timestamp objects.
In [262]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[262]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [263]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [264]: datetime.datetime(2012, 5, 25) + offset
Out[264]: Timestamp('2012-05-29 00:00:00')
In [265]: datetime.datetime(2012, 7, 3) + offset
Out[265]: Timestamp('2012-07-05 00:00:00')
In [266]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[266]: Timestamp('2012-07-06 00:00:00')
In [267]: datetime.datetime(2012, 7, 6) + offset
Out[267]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date and end_date class attributes
of AbstractHolidayCalendar. The defaults are shown below.
In [268]: AbstractHolidayCalendar.start_date
Out[268]: Timestamp('1970-01-01 00:00:00')
In [269]: AbstractHolidayCalendar.end_date
Out[269]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as
datetime/Timestamp/string.
In [270]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [271]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [272]: cal.holidays()
Out[272]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [273]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [274]: cal = get_calendar("ExampleCalendar")
In [275]: cal.rules
Out[275]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [276]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [277]: new_cal.rules
Out[277]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Time Series-related instance methods#
Shifting / lagging#
One may want to shift or lag the values in a time series back and forward in
time. The method for this is shift(), which is available on all of
the pandas objects.
In [278]: ts = pd.Series(range(len(rng)), index=rng)
In [279]: ts = ts[:5]
In [280]: ts.shift(1)
Out[280]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64
The shift method accepts a freq argument which can be a
DateOffset class, another timedelta-like object, or an
offset alias.
When freq is specified, shift method changes all the dates in the index
rather than changing the alignment of the data and the index:
In [281]: ts.shift(5, freq="D")
Out[281]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64
In [282]: ts.shift(5, freq=pd.offsets.BDay())
Out[282]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
dtype: int64
In [283]: ts.shift(5, freq="BM")
Out[283]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
dtype: int64
Note that when freq is specified, the leading entry is no longer NaN
because the data is not being realigned.
Frequency conversion#
The primary function for changing frequencies is the asfreq()
method. For a DatetimeIndex, this is basically just a thin, but convenient
wrapper around reindex() which generates a date_range and
calls reindex.
In [284]: dr = pd.date_range("1/1/2010", periods=3, freq=3 * pd.offsets.BDay())
In [285]: ts = pd.Series(np.random.randn(3), index=dr)
In [286]: ts
Out[286]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64
In [287]: ts.asfreq(pd.offsets.BDay())
Out[287]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation
method for any gaps that may appear after the frequency conversion.
In [288]: ts.asfreq(pd.offsets.BDay(), method="pad")
Out[288]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64
Filling forward / backward#
Related to asfreq and reindex is fillna(), which is
documented in the missing data section.
Converting to Python datetimes#
DatetimeIndex can be converted to an array of Python native
datetime.datetime objects using the to_pydatetime method.
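A minimal sketch of that conversion:
# Sketch: DatetimeIndex -> array of Python datetime.datetime objects.
import pandas as pd

idx = pd.date_range("2020-01-01", periods=2, freq="D")
idx.to_pydatetime()
# expected: array([datetime.datetime(2020, 1, 1, 0, 0),
#                  datetime.datetime(2020, 1, 2, 0, 0)], dtype=object)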
Resampling#
pandas has a simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects,
see the groupby docs.
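A minimal sketch of chaining resample after a groupby (the frame and column names here are illustrative, not taken from the examples below):
# Sketch: resampling within each group of a DataFrameGroupBy.
import pandas as pd

df = pd.DataFrame(
    {"key": ["a", "a", "b", "b"], "value": [1, 2, 3, 4]},
    index=pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-01", "2020-01-03"]),
)
# Sum the values in 2-day bins, computed separately within each key group.
df.groupby("key").resample("2D")["value"].sum()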
Basics#
In [289]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [290]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [291]: ts.resample("5Min").sum()
Out[291]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any function available via dispatching is available as
a method of the returned object, including sum, mean, std, sem,
max, min, median, first, last, ohlc:
In [292]: ts.resample("5Min").mean()
Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [293]: ts.resample("5Min").ohlc()
Out[293]:
open high low close
2012-01-01 308 460 9 205
In [294]: ts.resample("5Min").max()
Out[294]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [295]: ts.resample("5Min", closed="right").mean()
Out[295]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [296]: ts.resample("5Min", closed="left").mean()
Out[296]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label are used to manipulate the resulting labels.
label specifies whether the result is labeled with the beginning or
the end of the interval.
In [297]: ts.resample("5Min").mean() # by default label='left'
Out[297]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", label="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default values for label and closed are ‘left’ for all
frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’,
which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time, as in the following example with
the BusinessDay frequency:
In [299]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [300]: s.iloc[2] = pd.NaT
In [301]: s.dt.day_name()
Out[301]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [302]: s.resample("B").last().dt.day_name()
Out[302]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday.
To get the behavior where the value for Sunday is pushed to Monday, use
instead
In [303]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[303]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
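A minimal sketch of the kind parameter (the data mirrors the secondly series used above; the exact values will vary):
# Sketch: kind='period' labels the result with a PeriodIndex
# instead of a DatetimeIndex.
import numpy as np
import pandas as pd

rng = pd.date_range("1/1/2012", periods=100, freq="S")
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample("5Min", kind="period").sum()  # indexed by a PeriodIndex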
Upsampling#
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [304]: ts[:2].resample("250L").asfreq()
Out[304]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [305]: ts[:2].resample("250L").ffill()
Out[305]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [306]: ts[:2].resample("250L").ffill(limit=2)
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
Sparse resampling#
Sparse time series are those where you have far fewer points relative
to the amount of time you are looking to resample. Naively upsampling a sparse
series can potentially generate lots of intermediate values. When you don’t want
to use a method to fill these values, e.g. fill_method is None, then
intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN.
In [307]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [308]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [309]: ts.resample("3T").sum()
Out[309]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [310]: from functools import partial
In [311]: from pandas.tseries.frequencies import to_offset
In [312]: def round(t, freq):
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [313]: ts.groupby(partial(round, freq="3T")).sum()
Out[313]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
Aggregation#
Similar to the aggregating API, groupby API, and the window API,
a Resampler can be selectively resampled.
Resampling a DataFrame, the default will be to act on all columns with the same function.
In [314]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [315]: r = df.resample("3T")
In [316]: r.mean()
Out[316]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [317]: r["A"].mean()
Out[317]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [318]: r[["A", "B"]].mean()
Out[318]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [319]: r["A"].agg([np.sum, np.mean, np.std])
Out[319]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [320]: r.agg([np.sum, np.mean])
Out[320]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [321]: r.agg({"A": np.sum, "B": lambda x: np.std(x, ddof=1)})
Out[321]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it
must be implemented on the resampled object:
In [322]: r.agg({"A": "sum", "B": "std"})
Out[322]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [323]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[323]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
If a DataFrame does not have a datetimelike index, but instead you want
to resample based on a datetimelike column in the frame, it can be passed to the
on keyword.
In [324]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [325]: df
Out[325]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [326]: df.resample("M", on="date")[["a"]].sum()
Out[326]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of MultiIndex, its name or location can be passed to the
level keyword.
In [327]: df.resample("M", level="d")[["a"]].sum()
Out[327]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [328]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [329]: resampled = small.resample("H")
In [330]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__ for more.
Use origin or offset to adjust the start of the bins#
New in version 1.1.0.
The bins of the grouping are adjusted based on the beginning of the day of the time series starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criterion. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [331]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [332]: middle = "2000-10-02 00:00:00"
In [333]: rng = pd.date_range(start, end, freq="7min")
In [334]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [335]: ts
Out[335]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' differ depending on the start of the time series:
In [336]: ts.resample("17min", origin="start_day").sum()
Out[336]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [337]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[337]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of the start of the time series:
In [338]: ts.resample("17min", origin="epoch").sum()
Out[338]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[339]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin:
In [340]: ts.resample("17min", origin="2001-01-01").sum()
Out[340]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[341]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If needed you can just adjust the bins with an offset Timedelta that would be added to the default origin.
Those two examples are equivalent for this time series:
In [342]: ts.resample("17min", origin="start").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [343]: ts.resample("17min", offset="23h30min").sum()
Out[343]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin in the last example. In that case, origin will be set to the first value of the time series.
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq. The backward resample sets closed to 'right' by default since the last value should be considered as the edge point for the last bin.
We can set origin to 'end'. The value for a specific Timestamp index stands for the resample result from the current Timestamp minus freq to the current Timestamp with a right close.
In [344]: ts.resample('17min', origin='end').sum()
Out[344]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In addition, as a counterpart to the 'start_day' option, 'end_day' is supported. This will set the origin to the ceiling midnight of the largest Timestamp.
In [345]: ts.resample('17min', origin='end_day').sum()
Out[345]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as shown by the following computation.
In [346]: ceil_mid = rng.max().ceil('D')
In [347]: freq = pd.offsets.Minute(17)
In [348]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [349]: bin_res
Out[349]: Timestamp('2000-10-02 00:29:00')
Time span representation#
Regular intervals of time are represented by Period objects in pandas while
sequences of Period objects are collected in a PeriodIndex, which can
be created with the convenience function period_range.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc).
You can specify the span via freq keyword using a frequency alias like below.
Because freq represents a span of Period, it cannot be negative like “-3D”.
In [350]: pd.Period("2012", freq="A-DEC")
Out[350]: Period('2012', 'A-DEC')
In [351]: pd.Period("2012-1-1", freq="D")
Out[351]: Period('2012-01-01', 'D')
In [352]: pd.Period("2012-1-1 19:00", freq="H")
Out[352]: Period('2012-01-01 19:00', 'H')
In [353]: pd.Period("2012-1-1 19:00", freq="5H")
Out[353]: Period('2012-01-01 19:00', '5H')
Adding and subtracting integers from periods shifts the period by its own
frequency. Arithmetic is not allowed between Period objects with different freq (span).
In [354]: p = pd.Period("2012", freq="A-DEC")
In [355]: p + 1
Out[355]: Period('2013', 'A-DEC')
In [356]: p - 3
Out[356]: Period('2009', 'A-DEC')
In [357]: p = pd.Period("2012-01", freq="2M")
In [358]: p + 2
Out[358]: Period('2012-05', '2M')
In [359]: p - 1
Out[359]: Period('2011-11', '2M')
In [360]: p == pd.Period("2012-01", freq="3M")
Out[360]: False
If Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [361]: p = pd.Period("2014-07-01 09:00", freq="H")
In [362]: p + pd.offsets.Hour(2)
Out[362]: Period('2014-07-01 11:00', 'H')
In [363]: p + datetime.timedelta(minutes=120)
Out[363]: Period('2014-07-01 11:00', 'H')
In [364]: p + np.timedelta64(7200, "s")
Out[364]: Period('2014-07-01 11:00', 'H')
In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If Period has other frequencies, only the same offsets can be added. Otherwise, ValueError will be raised.
In [365]: p = pd.Period("2014-07", freq="M")
In [366]: p + pd.offsets.MonthEnd(3)
Out[366]: Period('2014-10', 'M')
In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will
return the number of frequency units between them:
In [367]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[367]: <10 * YearEnds: month=12>
PeriodIndex and period_range#
Regular sequences of Period objects can be collected in a PeriodIndex,
which can be constructed using the period_range convenience function:
In [368]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [369]: prng
Out[369]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex constructor can also be used directly:
In [370]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[370]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing multiplied frequency outputs a sequence of Period which
has multiplied span.
In [371]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[371]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
PeriodIndex constructor.
In [372]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[372]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas
objects:
In [373]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [374]: ps
Out[374]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [375]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [376]: idx
Out[376]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [377]: idx + pd.offsets.Hour(2)
Out[377]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [378]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [379]: idx
Out[379]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [380]: idx + pd.offsets.MonthEnd(3)
Out[380]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
Period dtypes#
PeriodIndex has a custom period dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with
period[freq] like period[D] or period[M], using frequency strings.
In [381]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [382]: pi
Out[382]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [383]: pi.dtype
Out[383]: period[M]
The period dtype can be used in .astype(...). It allows one to change the
freq of a PeriodIndex like .asfreq() and convert a
DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [384]: pi.astype("period[D]")
Out[384]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [385]: pi.astype("datetime64[ns]")
Out[385]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [386]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [387]: dti
Out[387]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [388]: dti.astype("period[M]")
Out[388]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
New in version 1.1.0.
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [389]: ps["2011-01"]
Out[389]: -2.9169013294054507
In [390]: ps[datetime.datetime(2011, 12, 25):]
Out[390]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [391]: ps["10/31/2011":"12/31/2011"]
Out[391]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than PeriodIndex returns partial sliced data.
In [392]: ps["2011"]
Out[392]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [393]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [394]: dfp
Out[394]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [395]: dfp.loc["2013-01-01 10H"]
Out[395]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [396]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[396]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period and PeriodIndex can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [397]: p = pd.Period("2011", freq="A-DEC")
In [398]: p
Out[398]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can
specify whether to return the starting or ending month:
In [399]: p.asfreq("M", how="start")
Out[399]: Period('2011-01', 'M')
In [400]: p.asfreq("M", how="end")
Out[400]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [401]: p.asfreq("M", "s")
Out[401]: Period('2011-01', 'M')
In [402]: p.asfreq("M", "e")
Out[402]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of
quarterly frequency) automatically returns the super-period that includes the
input period:
In [403]: p = pd.Period("2011-12", freq="M")
In [404]: p.asfreq("A-NOV")
Out[404]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in
November, the monthly period of December 2011 is actually in the 2012 A-NOV
period.
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, first quarter of 2011 could start in 2010 or
a few months into 2011. Via anchored frequencies, pandas works for all quarterly
frequencies Q-JAN through Q-DEC.
Q-DEC define regular calendar quarters:
In [405]: p = pd.Period("2012Q1", freq="Q-DEC")
In [406]: p.asfreq("D", "s")
Out[406]: Period('2012-01-01', 'D')
In [407]: p.asfreq("D", "e")
Out[407]: Period('2012-03-31', 'D')
Q-MAR defines fiscal year end in March:
In [408]: p = pd.Period("2011Q4", freq="Q-MAR")
In [409]: p.asfreq("D", "s")
Out[409]: Period('2011-01-01', 'D')
In [410]: p.asfreq("D", "e")
Out[410]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp:
In [411]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [412]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [413]: ts
Out[413]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [414]: ps = ts.to_period()
In [415]: ps
Out[415]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [416]: ps.to_timestamp()
Out[416]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or
end of the period:
In [417]: ps.to_timestamp("D", how="s")
Out[417]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [418]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [419]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [420]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [421]: ts.head()
Out[421]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp bounds, see Timestamp limitations,
then you can use a PeriodIndex and/or Series of Periods to do computations.
In [422]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [423]: span
Out[423]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [424]: s = pd.Series([20121231, 20141130, 99991231])
In [425]: s
Out[425]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [426]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [427]: s.apply(conv)
Out[427]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [428]: s.apply(conv)[2]
Out[428]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [429]: span = pd.PeriodIndex(s.apply(conv))
In [430]: span
Out[430]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz and dateutil libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [431]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [432]: rng.tz is None
Out[432]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize method or the tz keyword argument in
date_range(), Timestamp, or DatetimeIndex.
You can either pass pytz or dateutil time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz time zone objects by default.
To return dateutil time zone objects, append dateutil/ before the string.
In pytz you can find a list of common (and less common) time zones using
from pytz import common_timezones, all_timezones.
dateutil uses the OS time zones so there isn’t a fixed list available. For
common zones, the names are the same as pytz.
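For example, a small sketch of inspecting those pytz lists:
# Sketch: the pytz time zone name lists mentioned above.
from pytz import all_timezones, common_timezones

"Europe/London" in common_timezones          # True
len(all_timezones) >= len(common_timezones)  # True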
In [433]: import dateutil
# pytz
In [434]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [435]: rng_pytz.tz
Out[435]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [436]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [437]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [438]: rng_dateutil.tz
Out[438]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [439]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [440]: rng_utc.tz
Out[440]: tzutc()
New in version 0.25.0.
# datetime.timezone
In [441]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [442]: rng_utc.tz
Out[442]: datetime.timezone.utc
Note that the UTC time zone is a special case in dateutil and should be constructed explicitly
as an instance of dateutil.tz.tzutc. You can also construct other time
zone objects explicitly first.
In [443]: import pytz
# pytz
In [444]: tz_pytz = pytz.timezone("Europe/London")
In [445]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [446]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [447]: rng_pytz.tz == tz_pytz
Out[447]: True
# dateutil
In [448]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [449]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [450]: rng_dateutil.tz == tz_dateutil
Out[450]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert method.
In [451]: rng_pytz.tz_convert("US/Eastern")
Out[451]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz time zones, DatetimeIndex will construct a different
time zone object than a Timestamp for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp objects that may have different UTC offsets and cannot be
succinctly represented by one pytz time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [452]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [453]: dti.tz
Out[453]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [454]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [455]: ts.tz
Out[455]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern.
Warning
Be aware that a time zone definition across versions of time zone libraries may not
be considered equal. This may cause problems when working with stored data that
is localized using one version and operated on with a different version.
See here for how to handle such a situation.
Warning
For pytz time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern')).
Instead, the datetime needs to be localized using the localize method
on the pytz time zone object.
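A small sketch of the difference (not from the original text); the exact offsets printed depend on the pytz version, but the pattern is the point:
import datetime
import pytz

tz = pytz.timezone("US/Eastern")

# Incorrect: passing the pytz zone straight to the constructor uses the
# zone's default (historical LMT) offset rather than EST/EDT.
wrong = datetime.datetime(2011, 1, 1, tzinfo=tz)

# Correct: localize() picks the proper offset for that wall-clock time.
right = tz.localize(datetime.datetime(2011, 1, 1))

print(wrong.utcoffset())  # something like -1 day, 19:04:00 (LMT)
print(right.utcoffset())  # -1 day, 19:00:00, i.e. EST (-05:00)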
Warning
Be aware that for times in the future, correct conversion between time zones
(and UTC) cannot be guaranteed by any time zone library because a timezone’s
offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies
in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments
to timezone aware dates will not be applied. If and when the underlying libraries are fixed,
the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [456]: d_2037 = "2037-03-31T010101"
In [457]: d_2038 = "2038-03-31T010101"
In [458]: DST = "Europe/London"
In [459]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [460]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex or Timestamp will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [461]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [462]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [463]: rng_eastern[2]
Out[463]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
In [464]: rng_berlin[2]
Out[464]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [465]: rng_eastern[2] == rng_berlin[2]
Out[465]: True
Operations between Series in different time zones will yield UTC
Series, aligning the data on the UTC timestamps:
In [466]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [467]: eastern = ts_utc.tz_convert("US/Eastern")
In [468]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [469]: result = eastern + berlin
In [470]: result
Out[470]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [471]: result.index
Out[471]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None) or tz_convert(None).
tz_localize(None) will remove the time zone yielding the local time representation.
tz_convert(None) will remove the time zone after converting to UTC time.
In [472]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [473]: didx
Out[473]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [474]: didx.tz_localize(None)
Out[474]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [475]: didx.tz_convert(None)
Out[475]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [476]: didx.tz_convert("UTC").tz_localize(None)
Out[476]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
New in version 1.1.0.
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only for constructing from naive datetime.datetime
(see datetime documentation for details) or from Timestamp
or for constructing from components (see below). Only dateutil timezones are supported
(see dateutil documentation
for dateutil methods that deal with ambiguous datetimes) as pytz
timezones do not support fold (see pytz documentation
for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime
with pytz, please use Timestamp.tz_localize(). In general, we recommend to rely
on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct
control over how they are handled.
In [477]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[477]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [478]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[478]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
Ambiguous times when localizing#
tz_localize may not be able to determine the UTC offset of a timestamp
because daylight savings time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [479]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail as there are ambiguous times ('11/06/2011 01:00')
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [480]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[480]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [481]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[481]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [482]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[482]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
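As a small addition (not part of the original docs), the same ambiguous keyword is accepted by Timestamp.tz_localize, which is the recommended way to localize a single ambiguous datetime with pytz:
import pandas as pd

ts = pd.Timestamp("2011-11-06 01:00")

# True = treat as the first (DST) occurrence, False = the second (standard time)
print(ts.tz_localize("US/Eastern", ambiguous=True))   # 2011-11-06 01:00:00-04:00
print(ts.tz_localize("US/Eastern", ambiguous=False))  # 2011-11-06 01:00:00-05:00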
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [483]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [484]: dti
Out[484]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [485]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[485]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [486]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[486]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [487]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[487]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [488]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[488]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
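The same nonexistent options also work on a single Timestamp; a brief sketch (not part of the original docs):
import pandas as pd

ts = pd.Timestamp("2015-03-29 02:30:00")  # this wall-clock time does not exist in Europe/Warsaw

print(ts.tz_localize("Europe/Warsaw", nonexistent="shift_forward"))  # 2015-03-29 03:00:00+02:00
print(ts.tz_localize("Europe/Warsaw", nonexistent="NaT"))            # NaT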
Time zone Series operations#
A Series with time zone naive values is
represented with a dtype of datetime64[ns].
In [489]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [490]: s_naive
Out[490]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is
represented with a dtype of datetime64[ns, tz], where tz is the time zone.
In [491]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [492]: s_aware
Out[492]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series can be manipulated
via the .dt accessor; see the dt accessor section.
For example, to localize and convert a naive stamp to time zone aware.
In [493]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[493]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Time zone information can also be manipulated using the astype method.
This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [494]: s_aware.astype("datetime64[ns, CET]")
Out[494]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note
Using Series.to_numpy() on a Series, returns a NumPy array of the data.
NumPy does not currently support time zones (even though it is printing in the local time zone!),
therefore an object array of Timestamps is returned for time zone aware data:
In [495]: s_naive.to_numpy()
Out[495]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [496]: s_aware.to_numpy()
Out[496]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
By converting to an object array of Timestamps, it preserves the time zone
information. For example, when converting back to a Series:
In [497]: pd.Series(s_aware.to_numpy())
Out[497]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values
converted to UTC) instead of an array of objects, you can specify the
dtype argument:
In [498]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[498]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
| 676
| 1,085
|
Create regular time series from irregular interval with python
I wonder if it is possible to convert an irregular time series interval to a regular one without interpolating values from the other column, like this:
Index count
2018-01-05 00:00:00 1
2018-01-07 00:00:00 4
2018-01-08 00:00:00 15
2018-01-11 00:00:00 2
2018-01-14 00:00:00 5
2018-01-19 00:00:00 5
....
2018-12-26 00:00:00 6
2018-12-29 00:00:00 7
2018-12-30 00:00:00 8
And I expect the result to be something like this:
Index count
2018-01-01 00:00:00 0
2018-01-02 00:00:00 0
2018-01-03 00:00:00 0
2018-01-04 00:00:00 0
2018-01-05 00:00:00 1
2018-01-06 00:00:00 0
2018-01-07 00:00:00 4
2018-01-08 00:00:00 15
2018-01-09 00:00:00 0
2018-01-10 00:00:00 0
2018-01-11 00:00:00 2
2018-01-12 00:00:00 0
2018-01-13 00:00:00 0
2018-01-14 00:00:00 5
2018-01-15 00:00:00 0
2018-01-16 00:00:00 0
2018-01-17 00:00:00 0
2018-01-18 00:00:00 0
2018-01-19 00:00:00 5
....
2018-12-26 00:00:00 6
2018-12-27 00:00:00 0
2018-12-28 00:00:00 0
2018-12-29 00:00:00 7
2018-12-30 00:00:00 8
2018-12-31 00:00:00 0
So far I have tried resample from pandas, but it only partially solved my problem.
Thanks in advance
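One minimal way to do this (a sketch, not part of the original question; it assumes the data is already a pandas object with a DatetimeIndex) is to reindex against a complete daily date_range, or use asfreq, filling the gaps with 0 instead of interpolating:
import pandas as pd

# df is assumed to be the irregular frame above, indexed by its dates
full_range = pd.date_range("2018-01-01", "2018-12-31", freq="D")
regular = df.reindex(full_range, fill_value=0)

# If the first and last dates of df already span the desired range:
# regular = df.asfreq("D", fill_value=0)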
|
65,109,993
|
Python pd.read_excel - find duplicated rows in all Excel Sheets
|
<p>I have an Excel report with a few million rows across three sheets. I try to use the below code to import the entire Excel file and all sheets and check for duplicated rows across all sheets and display all duplicated rows (except the first one).</p>
<p>If I run the code without sheet_name=None it works but it only analyses the first Sheet.</p>
<p>But when I add argument sheet_name=None hoping that all Sheets will be checked for duplicates - it doesn't work and I get an error.</p>
<pre><code>import pandas as pd
df = pd.read_excel('Dup test.xlsx', sheet_name=None)
dups=df[df.duplicated()]
print(dups)
</code></pre>
<p>Does anyone know why it's happening? And how do I check for duplicated rows in every single sheet of my Excel file - please? Thank you.</p>
<p>This is the error:</p>
<blockquote>
<p>Traceback (most recent call last): File
"\eu.ad.hertz.com\userdocs\irac920\Desktop\My Files\Python\4.py",
line 4, in
dups=df[df.duplicated()] AttributeError: 'dict' object has no attribute 'duplicated'</p>
</blockquote>
| 65,111,369
| 2020-12-02T14:24:38.443000
| 1
| null | 0
| 437
|
python|pandas
|
<p>You get the attribute error because pandas returns a dictionary instead of a DataFrame when you specify sheet_name=None.</p>
<pre><code>>>> import pandas as pd
>>> df = pd.read_excel('import-order.xlsx', sheet_name=None)
>>> type(df)
<class 'dict'>
>>> df.keys()
dict_keys(['header', 'detail', 'ps_orders', 'ps_order_detail', 'Sheet5', 'Sheet7'])
>>> type(df['header'])
<class 'pandas.core.frame.DataFrame'>
>>>
</code></pre>
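<p>A possible follow-up sketch (not part of the original answer): loop over the returned dict, or concatenate the frames (assuming the sheets share the same columns), and then call <code>duplicated()</code>:</p>
<pre><code>import pandas as pd

sheets = pd.read_excel('Dup test.xlsx', sheet_name=None)  # dict of DataFrames

# duplicated rows within each sheet
for name, frame in sheets.items():
    print(name)
    print(frame[frame.duplicated()])

# duplicated rows across all sheets combined
combined = pd.concat(sheets.values(), ignore_index=True)
print(combined[combined.duplicated()])
</code></pre>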
| 2020-12-02T15:42:28.553000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html
|
pandas.read_excel#
pandas.read_excel(io, sheet_name=0, *, header=0, names=None, index_col=None, usecols=None, squeeze=None, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, parse_dates=False, date_parser=None, thousands=None, decimal='.', comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, storage_options=None)[source]#
You get the attribute error because pandas returns a dictionary instead of a DataFrame when you specify sheet_name=None.
>>> import pandas as pd
>>> df = pd.read_excel('import-order.xlsx', sheet_name=None)
>>> type(df)
<class 'dict'>
>>> df.keys()
dict_keys(['header', 'detail', 'ps_orders', 'ps_order_detail', 'Sheet5', 'Sheet7'])
>>> type(df['header'])
<class 'pandas.core.frame.DataFrame'>
>>>
Read an Excel file into a pandas DataFrame.
Supports xls, xlsx, xlsm, xlsb, odf, ods and odt file extensions
read from a local filesystem or URL. Supports an option to read
a single sheet or a list of sheets.
Parameters
io : str, bytes, ExcelFile, xlrd.Book, path object, or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.xlsx.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method,
such as a file handle (e.g. via builtin open function)
or StringIO.
sheet_name : str, int, list, or None, default 0
Strings are used for sheet names. Integers are used in zero-indexed
sheet positions (chart sheets do not count as a sheet position).
Lists of strings/integers are used to request multiple sheets.
Specify None to get all worksheets.
Available cases:
Defaults to 0: 1st sheet as a DataFrame
1: 2nd sheet as a DataFrame
"Sheet1": Load sheet with name “Sheet1”
[0, 1, "Sheet5"]: Load first, second and sheet named “Sheet5”
as a dict of DataFrame
None: All worksheets.
header : int, list of int, default 0
Row (0-indexed) to use for the column labels of the parsed
DataFrame. If a list of integers is passed those row positions will
be combined into a MultiIndex. Use None if there is no header.
names : array-like, default None
List of column names to use. If file contains no header row,
then you should explicitly pass header=None.
index_col : int, list of int, default None
Column (0-indexed) to use as the row labels of the DataFrame.
Pass None if there is no such column. If a list is passed,
those columns will be combined into a MultiIndex. If a
subset of data is selected with usecols, index_col
is based on the subset.
Missing values will be forward filled to allow roundtripping with
to_excel for merged_cells=True. To avoid forward filling the
missing values use set_index after reading the data instead of
index_col.
usecols : str, list-like, or callable, default None
If None, then parse all columns.
If str, then indicates comma separated list of Excel column letters
and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of
both sides.
If list of int, then indicates list of column numbers to be parsed
(0-indexed).
If list of string, then indicates list of column names to be parsed.
If callable, then evaluate each column name against it and parse the
column if the callable returns True.
Returns a subset of the columns according to behavior above.
squeeze : bool, default False
If the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_excel to squeeze
the data.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32}
Use object to preserve data as stored in Excel and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
engine : str, default None
If io is not a buffer or path, this must be set to identify io.
Supported engines: “xlrd”, “openpyxl”, “odf”, “pyxlsb”.
Engine compatibility :
“xlrd” supports old-style Excel files (.xls).
“openpyxl” supports newer Excel file formats.
“odf” supports OpenDocument file formats (.odf, .ods, .odt).
“pyxlsb” supports Binary Excel files.
Changed in version 1.2.0: The engine xlrd
now only supports old-style .xls files.
When engine=None, the following logic will be
used to determine the engine:
If path_or_buffer is an OpenDocument format (.odf, .ods, .odt),
then odf will be used.
Otherwise if path_or_buffer is an xls format,
xlrd will be used.
Otherwise if path_or_buffer is in xlsb format,
pyxlsb will be used.
New in version 1.3.0.
Otherwise openpyxl will be used.
Changed in version 1.3.0.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can
either be integers or column labels, values are functions that take one
input argument, the Excel cell content, and return the transformed
content.
true_values : list, default None
Values to consider as True.
false_values : list, default None
Values to consider as False.
skiprows : list-like, int, or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file. If callable, the callable function will be evaluated
against the row indices, returning True if the row should be skipped and
False otherwise. An example of a valid callable argument would be lambda
x: x in [0, 2].
nrows : int, default None
Number of rows to parse.
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted
as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’.
keep_default_na : bool, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filter : bool, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : bool, default False
Indicate number of NA values placed in non-numeric columns.
parse_dates : bool, list-like, or dict, default False
The behavior is as follows:
bool. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. If you don`t want to
parse some cells as date just change their type in Excel to “Text”.
For non-standard datetime parsing, use pd.to_datetime after pd.read_excel.
Note: A fast-path exists for iso8601-formatted dates.
date_parser : function, optional
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by parse_dates into a single array
and pass that; and 3) call date_parser once for each row using one or
more strings (corresponding to the columns defined by parse_dates) as
arguments.
thousands : str, default None
Thousands separator for parsing string columns to numeric. Note that
this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.
decimal : str, default ‘.’
Character to recognize as decimal point for parsing string columns to numeric.
Note that this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.(e.g. use ‘,’ for European data).
New in version 1.4.0.
comment : str, default None
Comments out remainder of line. Pass a character or characters to this
argument to indicate comments in the input file. Any data between the
comment string and the end of the current line is ignored.
skipfooter : int, default 0
Rows at the end to skip (0-indexed).
convert_float : bool, default True
Convert integral floats to int (i.e., 1.0 –> 1). If False, all numeric
data will be read in as floats: Excel stores all numbers as floats
internally.
Deprecated since version 1.3.0: convert_float will be removed in a future version
mangle_dupe_cols : bool, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than
‘X’…’X’. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
Deprecated since version 1.5.0: Not implemented, and a new argument to specify the pattern for the
names of duplicated columns will be added instead
storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
DataFrame or dict of DataFrames
DataFrame from the passed in Excel file. See notes in sheet_name
argument for more information on when a dict of DataFrames is returned.
See also
DataFrame.to_excel : Write DataFrame to an Excel file.
DataFrame.to_csv : Write DataFrame to a comma-separated values (csv) file.
read_csv : Read a comma-separated values (csv) file into DataFrame.
read_fwf : Read a table of fixed-width formatted lines into DataFrame.
Examples
The file can be read using the file name as string or an open file object:
>>> pd.read_excel('tmp.xlsx', index_col=0)
Name Value
0 string1 1
1 string2 2
2 #Comment 3
>>> pd.read_excel(open('tmp.xlsx', 'rb'),
... sheet_name='Sheet3')
Unnamed: 0 Name Value
0 0 string1 1
1 1 string2 2
2 2 #Comment 3
Index and header can be specified via the index_col and header arguments
>>> pd.read_excel('tmp.xlsx', index_col=None, header=None)
0 1 2
0 NaN Name Value
1 0.0 string1 1
2 1.0 string2 2
3 2.0 #Comment 3
Column types are inferred but can be explicitly specified
>>> pd.read_excel('tmp.xlsx', index_col=0,
... dtype={'Name': str, 'Value': float})
Name Value
0 string1 1.0
1 string2 2.0
2 #Comment 3.0
True, False, and NA values, and thousands separators have defaults,
but can be explicitly specified, too. Supply the values you would like
as strings or lists of strings!
>>> pd.read_excel('tmp.xlsx', index_col=0,
... na_values=['string1', 'string2'])
Name Value
0 NaN 1
1 NaN 2
2 #Comment 3
Comment lines in the excel input file can be skipped using the comment kwarg
>>> pd.read_excel('tmp.xlsx', index_col=0, comment='#')
Name Value
0 string1 1.0
1 string2 2.0
2 None NaN
| 490
| 887
|
Python pd.read_excel - find duplicated rows in all Excel Sheets
I have an Excel report with a few million rows across three sheets. I try to use the below code to import the entire Excel file and all sheets and check for duplicated rows across all sheets and display all duplicated rows (except the first one).
If I run the code without sheet_name=None it works but it only analyses the first Sheet.
But when I add argument sheet_name=None hoping that all Sheets will be checked for duplicates - it doesn't work and I get an error.
import pandas as pd
df = pd.read_excel('Dup test.xlsx', sheet_name=None)
dups=df[df.duplicated()]
print(dups)
Does anyone know why it's happening? And how do I check for duplicated rows in every single sheet of my Excel file - please? Thank you.
This is the error:
Traceback (most recent call last): File
"\eu.ad.hertz.com\userdocs\irac920\Desktop\My Files\Python\4.py",
line 4, in
dups=df[df.duplicated()] AttributeError: 'dict' object has no attribute 'duplicated'
|
61,045,087
|
Pandas: Dataframe diff() with reset of value accounted for
|
<p>I want to get the delta between consecutive values in a dataframe. The data is supposed to be monotonically increasing, but it is sometimes reset. To account for that, when the diff value is negative I want to keep the original value instead:</p>
<p>let's say I've got dataframe like:</p>
<pre><code>val:
1
2
3
1
2
4
</code></pre>
<p>then after diff().fillna(0) operation I've got</p>
<pre><code>val:
0
1
1
-2
1
2
</code></pre>
<p>But I would like to:</p>
<pre><code>val:
0
1
1
1
1
2
</code></pre>
<p>Any easy way of doing it?</p>
| 61,045,253
| 2020-04-05T15:36:15.817000
| 1
| null | 1
| 184
|
python|pandas
|
<p>You could take the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer"><code>diff</code></a>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html" rel="nofollow noreferrer"><code>where</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ffill.html" rel="nofollow noreferrer"><code>ffill</code></a> to replace those negatives with the previous values:</p>
<pre><code>d = df.name.diff()
d.where(d.gt(0), df.name.ffill())
0 1.0
1 1.0
2 1.0
3 1.0
4 1.0
5 2.0
Name: name, dtype: float64
</code></pre>
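<p>Applied to the <code>val</code> column from the question, a similar sketch (not part of the original answer) reproduces the expected output:</p>
<pre><code>d = df['val'].diff().fillna(0)
out = d.where(d.ge(0), df['val'])
</code></pre>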
| 2020-04-05T15:50:35.880000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.reset_index.html
|
pandas.Series.reset_index#
Series.reset_index(level=None, *, drop=False, name=_NoDefault.no_default, inplace=False, allow_duplicates=False)[source]#
You could take the diff, then use where with ffill to replace those negatives with the previous values:
d = df.name.diff()
d.where(d.gt(0), df.name.ffill())
0 1.0
1 1.0
2 1.0
3 1.0
4 1.0
5 2.0
Name: name, dtype: float64
Generate a new DataFrame or Series with the index reset.
This is useful when the index needs to be treated as a column, or
when the index is meaningless and needs to be reset to the default
before another operation.
Parameters
level : int, str, tuple, or list, default optional
For a Series with a MultiIndex, only remove the specified levels
from the index. Removes all levels by default.
drop : bool, default False
Just reset the index, without inserting it as a column in
the new DataFrame.
name : object, optional
The name to use for the column containing the original Series
values. Uses self.name by default. This argument is ignored
when drop is True.
inplace : bool, default False
Modify the Series in place (do not create a new object).
allow_duplicates : bool, default False
Allow duplicate column labels to be created.
New in version 1.5.0.
Returns
Series or DataFrame or None
When drop is False (the default), a DataFrame is returned.
The newly created columns will come first in the DataFrame,
followed by the original Series values.
When drop is True, a Series is returned.
In either case, if inplace=True, no value is returned.
See also
DataFrame.reset_index : Analogous function for DataFrame.
Examples
>>> s = pd.Series([1, 2, 3, 4], name='foo',
... index=pd.Index(['a', 'b', 'c', 'd'], name='idx'))
Generate a DataFrame with default index.
>>> s.reset_index()
idx foo
0 a 1
1 b 2
2 c 3
3 d 4
To specify the name of the new column use name.
>>> s.reset_index(name='values')
idx values
0 a 1
1 b 2
2 c 3
3 d 4
To generate a new Series with the default set drop to True.
>>> s.reset_index(drop=True)
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
To update the Series in place, without generating a new one
set inplace to True. Note that it also requires drop=True.
>>> s.reset_index(inplace=True, drop=True)
>>> s
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
The level parameter is interesting for Series with a multi-level
index.
>>> arrays = [np.array(['bar', 'bar', 'baz', 'baz']),
... np.array(['one', 'two', 'one', 'two'])]
>>> s2 = pd.Series(
... range(4), name='foo',
... index=pd.MultiIndex.from_arrays(arrays,
... names=['a', 'b']))
To remove a specific level from the Index, use level.
>>> s2.reset_index(level='a')
a foo
b
one bar 0
two bar 1
one baz 2
two baz 3
If level is not set, all levels are removed from the Index.
>>> s2.reset_index()
a b foo
0 bar one 0
1 bar two 1
2 baz one 2
3 baz two 3
| 180
| 419
|
Pandas: Dataframe diff() with reset of value accounted for
I want to get the delta between consecutive values in a dataframe. The data is supposed to be monotonically increasing, but it is sometimes reset. To account for that, when the diff value is negative I want to keep the original value instead:
let's say I've got dataframe like:
val:
1
2
3
1
2
4
then after diff().fillna(0) operation I've got
val:
0
1
1
-2
1
2
But I would like to:
val:
0
1
1
1
1
2
Any easy way of doing it?
|
63,403,660
|
pandas find the nearest variables index from multiple columns
|
<p>I have a sorted DataFrame that looks like this,</p>
<pre><code>df =
0 1
0 -0.3 -0.2
1 -0.1 -0.1
2 0.4 0.2
3 0.7 0.5
4 2.0 0.8
</code></pre>
<p>and another DataFrame that looks like this,</p>
<pre><code>df1 =
0 1
0 0.5 -0.1
</code></pre>
<p>I would like the variables in <code>df1</code> to find their nearest respective value in <code>df</code> and write their index. The expected output would be something like this</p>
<pre><code>df2 =
0 1
0 0.5 -0.1
1 2 1
</code></pre>
<p>Would some type of comprehensions work? Having a lot of trouble with this.</p>
| 63,403,726
| 2020-08-13T21:46:19.500000
| 2
| 0
| 1
| 189
|
python|pandas
|
<p>Try with <code>sub</code> and <code>idxmin</code></p>
<pre><code>s=df.sub(df1.values).abs().idxmin()
Out[38]:
0 2
1 1
dtype: int64
</code></pre>
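<p>To assemble the expected <code>df2</code> layout (a sketch, not part of the original answer), the matched indexes can be appended as a second row under the <code>df1</code> values:</p>
<pre><code>import pandas as pd

s = df.sub(df1.values).abs().idxmin()
df2 = pd.concat([df1, s.to_frame().T], ignore_index=True)
</code></pre>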
| 2020-08-13T21:52:24.857000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.quantile.html
|
pandas.DataFrame.quantile#
DataFrame.quantile(q=0.5, axis=0, numeric_only=_NoDefault.no_default, interpolation='linear', method='single')[source]#
Return values at the given quantile over requested axis.
Parameters
q : float or array-like, default 0.5 (50% quantile)
Value between 0 <= q <= 1, the quantile(s) to compute.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Equals 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.
numeric_only : bool, default True
If False, the quantile of datetime and timedelta data will be
computed as well.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
interpolation : {‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}
This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the
fractional part of the index surrounded by i and j.
lower: i.
Try with sub and idxmin
s=df.sub(df1.values).abs().idxmin()
Out[38]:
0 2
1 1
dtype: int64
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
method : {‘single’, ‘table’}, default ‘single’
Whether to compute quantiles per-column (‘single’) or over all columns
(‘table’). When ‘table’, the only allowed interpolation methods are
‘nearest’, ‘lower’, and ‘higher’.
Returns
Series or DataFrame
If q is an array, a DataFrame will be returned where the index is q, the columns are the columns of self, and the
values are the quantiles.
If q is a float, a Series will be returned where the index is the columns of self and the values are the quantiles.
See also
core.window.rolling.Rolling.quantile : Rolling quantile.
numpy.percentile : Numpy function to compute the percentile.
Examples
>>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]),
... columns=['a', 'b'])
>>> df.quantile(.1)
a 1.3
b 3.7
Name: 0.1, dtype: float64
>>> df.quantile([.1, .5])
a b
0.1 1.3 3.7
0.5 2.5 55.0
Specifying method=’table’ will compute the quantile over all columns.
>>> df.quantile(.1, method="table", interpolation="nearest")
a 1
b 1
Name: 0.1, dtype: int64
>>> df.quantile([.1, .5], method="table", interpolation="nearest")
a b
0.1 1 1
0.5 3 100
Specifying numeric_only=False will also compute the quantile of
datetime and timedelta data.
>>> df = pd.DataFrame({'A': [1, 2],
... 'B': [pd.Timestamp('2010'),
... pd.Timestamp('2011')],
... 'C': [pd.Timedelta('1 days'),
... pd.Timedelta('2 days')]})
>>> df.quantile(0.5, numeric_only=False)
A 1.5
B 2010-07-02 12:00:00
C 1 days 12:00:00
Name: 0.5, dtype: object
| 1,001
| 1,098
|
pandas find the nearest variables index from multiple columns
I have a sorted DataFrame that looks like this,
df =
0 1
0 -0.3 -0.2
1 -0.1 -0.1
2 0.4 0.2
3 0.7 0.5
4 2.0 0.8
and another DataFrame that looks like this,
df1 =
0 1
0 0.5 -0.1
I would like the variables in df1 to find their nearest respective value in df and write their index. The expected output would be something like this
df2 =
0 1
0 0.5 -0.1
1 2 1
Would some type of comprehensions work? Having a lot of trouble with this.
|
69,259,365
|
Pandas- How to work with pandas .dt.time timestamps
|
<p>I have a dataframe df_to_db:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>enter_dt_tm</th>
<th>exit_dt_tm</th>
<th>total_time</th>
<th>08:00-23:00</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-07-20 13:24:00</td>
<td>2021-07-20 22:51:00</td>
<td>0 days 09:27:00</td>
<td>0 days 09:27:00</td>
</tr>
<tr>
<td>2021-05-31 02:52:00</td>
<td>2021-05-31 06:16:00</td>
<td>0 days 03:24:00</td>
<td>0 days 00:00:00</td>
</tr>
<tr>
<td>2021-06-01 23:50:00</td>
<td>2021-06-01 23:58:00</td>
<td>0 days 00:08:00</td>
<td>0 days 00:00:00</td>
</tr>
<tr>
<td>2021-06-01 23:58:00</td>
<td>2021-06-02 07:58:00</td>
<td>0 days 08:00:00</td>
<td>0 days 00:00:00</td>
</tr>
<tr>
<td>2021-06-01 07:59:00</td>
<td>2021-06-01 23:12:00</td>
<td>0 days 15:13:00</td>
<td>0 days 15:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to print only the rows where the "08:00-23:00" value is greater than 15 minutes AND the "enter_dt_tm" timestamp is not between 07:40:00 am and 08:00:00 am (this would eliminate all but the first row).</p>
<p>I am writing the code as follows but am having issues:</p>
<pre><code>test_df = df_to_db[df_to_db['08:00-23:00'] >= '0 days 00:15:00' & df_to_db['enter_dt_tm'].dt.time <= '07:40:00' & df_to_db['enter_dt_tm'].dt.time >= '08:00:00']
print(test_df)
</code></pre>
<p>The first portion where I specify 08:00-23:00 is greater than 15 minutes works, but I can't seem to figure out how to work with .dt.time to get my code to work. I have tried writing it in different formats with no luck (e.g. (7, 40, 0), 7, 40, 0) etc.</p>
<p>Any help would be greatly appreciated.</p>
<p><strong>EDIT: The error I am receiving is: TypeError: '<=' not supported between instances of 'datetime.time' and 'str'</strong></p>
| 69,259,753
| 2021-09-20T18:39:39.960000
| 3
| null | 1
| 222
|
python|pandas
|
<p>You can use:</p>
<pre><code>import datetime
import pandas as pd
import numpy as np
# convert `enter_dt_tm` to datetime format
df_to_db['enter_dt_tm'] = pd.to_datetime(df_to_db['enter_dt_tm'])
# convert `08:00-23:00` to timedelta format
df_to_db['08:00-23:00'] = pd.to_timedelta(df_to_db['08:00-23:00'])
# set boolean mask for `08:00-23:00` >= 15 minutes
m1 = df_to_db['08:00-23:00'] >= np.timedelta64(15,'m')
# set boolean mask for `enter_dt_tm` NOT between 07:40:00 am and 08:00:00 am
m2 = ~ df_to_db['enter_dt_tm'].dt.time.between(datetime.time(7, 40, 0), datetime.time(8, 0, 0))
# filter by both boolean masks
df_to_db[m1 & m2]
</code></pre>
<p><strong>Result:</strong></p>
<pre><code> enter_dt_tm exit_dt_tm total_time 08:00-23:00
0 2021-07-20 13:24:00 2021-07-20 22:51:00 0 days 09:27:00 0 days 09:27:00
</code></pre>
| 2021-09-20T19:18:05.867000
| 2
|
https://pandas.pydata.org/docs/user_guide/timeseries.html
|
Time series / date functionality#
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created
You can use:
import datetime
import pandas as pd
import numpy as np
# convert `enter_dt_tm` to datetime format
df_to_db['enter_dt_tm'] = pd.to_datetime(df_to_db['enter_dt_tm'])
# convert `08:00-23:00` to timedelta format
df_to_db['08:00-23:00'] = pd.to_timedelta(df_to_db['08:00-23:00'])
# set boolean mask for `08:00-23:00` >= 15 minutes
m1 = df_to_db['08:00-23:00'] >= np.timedelta64(15,'m')
# set boolean mask for `enter_dt_tm` NOT between 07:40:00 am and 08:00:00 am
m2 = ~ df_to_db['enter_dt_tm'].dt.time.between(datetime.time(7, 40, 0), datetime.time(8, 0, 0))
# filter by both boolean masks
df_to_db[m1 & m2]
Result:
enter_dt_tm exit_dt_tm total_time 08:00-23:00
0 2021-07-20 13:24:00 2021-07-20 22:51:00 0 days 09:27:00 0 days 09:27:00
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for
performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
Time spans: A span of time defined by a point in time and its associated frequency.
Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
| Concept | Scalar Class | Array Class | pandas Data Type | Primary Creation Method |
|---|---|---|---|---|
| Date times | Timestamp | DatetimeIndex | datetime64[ns] or datetime64[ns, tz] | to_datetime or date_range |
| Time deltas | Timedelta | TimedeltaIndex | timedelta64[ns] | to_timedelta or timedelta_range |
| Time spans | Period | PeriodIndex | period[freq] | Period or period_range |
| Date offsets | DateOffset | None | None | DateOffset |
For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can directly also support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta
and Period data when passed into those constructors. DateOffset
data however will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT which
is useful for representing missing or null date like values and behaves similar
as np.nan does for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates
values with points in time. For pandas objects it means using the points in
time.
In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[28]: Timestamp('2012-05-01 00:00:00')
In [29]: pd.Timestamp("2012-05-01")
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change
variables with a time span instead. The span represented by Period can be
specified explicitly, or inferred from datetime string format.
For example:
In [31]: pd.Period("2011-01")
Out[31]: Period('2011-01', 'M')
In [32]: pd.Period("2012-05", freq="D")
Out[32]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of
Timestamp and Period are automatically coerced to DatetimeIndex
and PeriodIndex respectively.
In [33]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [34]: ts = pd.Series(np.random.randn(3), dates)
In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex
In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [38]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [39]: ts = pd.Series(np.random.randn(3), periods)
In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex
In [41]: ts.index
Out[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp and sequences of timestamps using instances of
DatetimeIndex. For regular time spans, pandas uses Period objects for
scalar values and PeriodIndex for sequences of spans. Better support for
irregular intervals with arbitrary start and end points is forthcoming in
future releases.
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings,
epochs, or a mixture, you can use the to_datetime function. When passed
a Series, this returns a Series (with the same index), while a list-like
is converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(["Jul 31, 2009", "2010-01-10", None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [44]: pd.to_datetime(["2005/11/23", "2010.12.31"])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst flag:
In [45]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [46]: pd.to_datetime(["14-01-2012", "01-14-2012"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst were False, and in the case of parsing delimited date strings
(e.g. 31-12-2012) then a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp.
Timestamp can also accept string input, but it doesn’t accept string parsing
options like dayfirst or format, so use to_datetime if these are required.
In [47]: pd.to_datetime("2010/11/12")
Out[47]: Timestamp('2010-11-12 00:00:00')
In [48]: pd.Timestamp("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex constructor directly:
In [49]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [51]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[51]: Timestamp('2010-11-12 00:00:00')
In [52]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[52]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [53]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [55]: pd.to_datetime(df[["year", "month", "day"]])
Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
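For instance, a small sketch (not from the original docs) that also supplies some of the optional components:
import pandas as pd

parts = pd.DataFrame(
    {"year": [2015], "month": [2], "day": [4], "hour": [2], "minute": [30], "second": [15]}
)
print(pd.to_datetime(parts))  # 0   2015-02-04 02:30:15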
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format
Pass errors='ignore' to return the original input when unparsable:
In [56]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[56]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce' to convert unparsable data to NaT (not a time):
In [57]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp and
DatetimeIndex. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin parameter.
In [58]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [59]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter
that was discussed above. The
available units are listed on the documentation for pandas.to_datetime().
Changed in version 1.0.0.
Constructing a Timestamp or DatetimeIndex with an epoch timestamp
with the tz argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [60]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use fixed-width
types (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [63]: pd.to_datetime(1490195805433502912, unit="ns")
Out[63]: Timestamp('2017-03-22 15:16:45.433502912')
See also
Using the origin parameter
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [64]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the
“unit” (1 second).
In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
Using the origin parameter#
Using the origin parameter, one can specify an alternative starting point for creation
of a DatetimeIndex. For example, to use 1960-01-01 as the starting date:
In [67]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00.
Commonly called ‘unix epoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit="D")
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex or
Index constructor and pass in a list of datetime objects:
In [69]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [70]: index = pd.DatetimeIndex(dates)
In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [72]: index = pd.Index(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a
calendar day while the default for bdate_range is a business day:
In [74]: start = datetime.datetime(2011, 1, 1)
In [75]: end = datetime.datetime(2012, 1, 1)
In [76]: index = pd.date_range(start, end)
In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [78]: index = pd.bdate_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a
variety of frequency aliases:
In [80]: pd.date_range(start, periods=1000, freq="M")
Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [81]: pd.bdate_range(start, periods=250, freq="BQS")
Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range and bdate_range make it easy to generate a range of dates
using various combinations of parameters like start, end, periods,
and freq. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [82]: pd.date_range(start, end, freq="BM")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [83]: pd.date_range(start, end, freq="W")
Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [84]: pd.bdate_range(end=end, periods=20)
Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [85]: pd.bdate_range(start=start, periods=20)
Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start, end, and periods will generate a range of evenly spaced
dates from start to end inclusively, with periods number of elements in the
resulting DatetimeIndex:
In [86]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [87]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range can also generate a range of custom frequency dates by using
the weekmask and holidays parameters. These parameters will only be
used if a custom frequency string is passed.
In [88]: weekmask = "Mon Wed Fri"
In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [90]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[90]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [91]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
See also
Custom business days
Timestamp limitations#
Since pandas represents timestamps in nanosecond resolution, the time span that
can be represented using a 64-bit integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145224193')
In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')
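Attempting to construct a timestamp outside these bounds raises an error. A minimal sketch (the exact message may vary by pandas version):
# A date beyond pd.Timestamp.max cannot be represented at nanosecond resolution
try:
    pd.Timestamp("3000-01-01")
except pd.errors.OutOfBoundsDatetime as err:
    print(err)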
See also
Representing out-of-bounds spans
Indexing#
One of the main uses for DatetimeIndex is as an index for pandas objects.
The DatetimeIndex class contains many time series related optimizations:
A large range of dates for various offsets are pre-computed and cached
under the hood in order to make generating subsequent date ranges very fast
(just have to grab a slice).
Fast shifting using the shift method on pandas objects.
Unioning of overlapping DatetimeIndex objects with the same frequency is
very fast (important for fast data alignment).
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic.
DatetimeIndex objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
See also
Reindexing methods
Note
While pandas does not force you to have a sorted date index, some of these
methods may have unexpected or incorrect behavior if the dates are unsorted.
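If your index may be unsorted, sorting it first is a safe default. A minimal sketch with hypothetical data:
# Build a Series on an unsorted DatetimeIndex, then sort it before slicing
idx = pd.DatetimeIndex(["2011-03-01", "2011-01-01", "2011-02-01"])
s = pd.Series([1, 2, 3], index=idx)
s = s.sort_index()  # slicing such as s["2011-01":"2011-02"] now behaves as expected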
DatetimeIndex can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [94]: rng = pd.date_range(start, end, freq="BM")
In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [99]: ts["1/31/2011"]
Out[99]: 0.11920871129693428
In [100]: ts[datetime.datetime(2011, 12, 25):]
Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [101]: ts["10/31/2011":"12/31/2011"]
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in
the year or year and month as strings:
In [102]: ts["2011"]
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [103]: ts["2011-6"]
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame rows with a single string with getitem (e.g. frame[dtstring])
is deprecated starting with pandas 1.2.0 (given the ambiguity whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc (e.g. frame.loc[dtstring]) is still supported.
In [104]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [106]: dft.loc["2013"]
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and
time for the month:
In [107]: dft["2013-1":"2013-2"]
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [108]: dft["2013-1":"2013-2-28"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [109]: dft["2013-1":"2013-2-28 00:00:00"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [110]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [111]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [113]: dft2.loc["2013-01-05"]
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [114]: idx = pd.IndexSlice
In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [116]: dft2.loc[idx[:, "2013-01-05"], :]
Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
New in version 0.25.0.
Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0
In [119]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[119]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series object with a minute resolution index:
In [120]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [121]: series_minute.index.resolution
Out[121]: 'minute'
A timestamp string less accurate than a minute gives a Series object.
In [122]: series_minute["2011-12-31 23"]
Out[122]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [123]: series_minute["2011-12-31 23:59"]
Out[123]: 1
In [124]: series_minute["2011-12-31 23:59:00"]
Out[124]: 1
If the index resolution is second, then a minute-accurate timestamp gives a
Series.
In [125]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [126]: series_second.index.resolution
Out[126]: 'second'
In [127]: series_second["2011-12-31 23:59"]
Out[127]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index DataFrame with .loc[] as well.
In [128]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [129]: dft_minute.loc["2011-12-31 23"]
Out[129]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame’s [] will be column-wise and not row-wise; see Indexing Basics. For example, dft_minute['2011-12-31 23:59'] will raise a KeyError because '2011-12-31 23:59' has the same resolution as the index and there is no column with that name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc["2011-12-31 23:59"]
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [132]: series_monthly.index.resolution
Out[132]: 'day'
In [133]: series_monthly["2011-12"] # returns Series
Out[133]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in the previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words, how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have an exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With no defaults (all components specified explicitly).
In [135]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate() convenience function is provided that is similar
to slicing. Note that truncate assumes a 0 value for any unspecified date
component in a DatetimeIndex, in contrast to slicing, which returns any
partially matching dates:
In [136]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [138]: ts2.truncate(before="2011-11", after="2011-12")
Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [139]: ts2["2011-11":"2011-12"]
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex frequency
regularity will result in a DatetimeIndex, although frequency is lost:
In [140]: ts2[[0, 2, 6]].index
Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
Property
Description
year
The year of the datetime
month
The month of the datetime
day
The days of the datetime
hour
The hour of the datetime
minute
The minutes of the datetime
second
The seconds of the datetime
microsecond
The microseconds of the datetime
nanosecond
The nanoseconds of the datetime
date
Returns datetime.date (does not contain timezone information)
time
Returns datetime.time (does not contain timezone information)
timetz
Returns datetime.time as local time with timezone information
dayofyear
The ordinal day of year
day_of_year
The ordinal day of year
weekofyear
The week ordinal of the year
week
The week ordinal of the year
dayofweek
The number of the day of the week with Monday=0, Sunday=6
day_of_week
The number of the day of the week with Monday=0, Sunday=6
weekday
The number of the day of the week with Monday=0, Sunday=6
quarter
Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month
The number of days in the month of the datetime
is_month_start
Logical indicating if first day of month (defined by frequency)
is_month_end
Logical indicating if last day of month (defined by frequency)
is_quarter_start
Logical indicating if first day of quarter (defined by frequency)
is_quarter_end
Logical indicating if last day of quarter (defined by frequency)
is_year_start
Logical indicating if first day of year (defined by frequency)
is_year_end
Logical indicating if last day of year (defined by frequency)
is_leap_year
Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can
access these properties via the .dt accessor, as detailed in the section
on .dt accessors.
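As a small sketch, the properties listed above are available element-wise through the .dt accessor:
# Access time/date components of a datetime-valued Series via .dt
s = pd.Series(pd.date_range("2000-01-01", periods=3, freq="D"))
s.dt.year        # the year of each element
s.dt.day_name()  # e.g. 'Saturday', 'Sunday', 'Monday'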
New in version 1.1.0.
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [141]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [142]: idx.isocalendar()
Out[142]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [143]: idx.to_series().dt.isocalendar()
Out[143]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify
a frequency that defined:
how the date times in DatetimeIndex were spaced when using date_range()
the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset
is similar to a Timedelta that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta day will always increment datetimes by 24 hours, while a DateOffset day
will increment datetimes to the same time the next day, whether that day represents 23, 24 or 25 hours due to daylight
saving time. However, all DateOffset subclasses that are an hour or smaller
(Hour, Minute, Second, Milli, Micro, Nano) behave like
Timedelta and respect absolute time.
The basic DateOffset acts similarly to dateutil.relativedelta (relativedelta documentation),
shifting a datetime by the corresponding calendar duration specified. The
arithmetic operator (+) can be used to perform the shift.
# This particular day contains a day light savings time transition
In [144]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [145]: ts + pd.Timedelta(days=1)
Out[145]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [146]: ts + pd.DateOffset(days=1)
Out[146]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [147]: friday = pd.Timestamp("2018-01-05")
In [148]: friday.day_name()
Out[148]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [149]: two_business_days = 2 * pd.offsets.BDay()
In [150]: friday + two_business_days
Out[150]: Timestamp('2018-01-09 00:00:00')
In [151]: (friday + two_business_days).day_name()
Out[151]: 'Tuesday'
Most DateOffsets have associated frequency strings, or offset aliases, that can be passed
into freq keyword arguments. The available date offsets and associated frequency strings can be found below:
Date Offset
Frequency String
Description
DateOffset
None
Generic offset class, defaults to absolute 24 hours
BDay or BusinessDay
'B'
business day (weekday)
CDay or CustomBusinessDay
'C'
custom business day
Week
'W'
one week, optionally anchored on a day of the week
WeekOfMonth
'WOM'
the x-th day of the y-th week of each month
LastWeekOfMonth
'LWOM'
the x-th day of the last week of each month
MonthEnd
'M'
calendar month end
MonthBegin
'MS'
calendar month begin
BMonthEnd or BusinessMonthEnd
'BM'
business month end
BMonthBegin or BusinessMonthBegin
'BMS'
business month begin
CBMonthEnd or CustomBusinessMonthEnd
'CBM'
custom business month end
CBMonthBegin or CustomBusinessMonthBegin
'CBMS'
custom business month begin
SemiMonthEnd
'SM'
15th (or other day_of_month) and calendar month end
SemiMonthBegin
'SMS'
15th (or other day_of_month) and calendar month begin
QuarterEnd
'Q'
calendar quarter end
QuarterBegin
'QS'
calendar quarter begin
BQuarterEnd
'BQ'
business quarter end
BQuarterBegin
'BQS'
business quarter begin
FY5253Quarter
'REQ'
retail (aka 52-53 week) quarter
YearEnd
'A'
calendar year end
YearBegin
'AS' or 'BYS'
calendar year begin
BYearEnd
'BA'
business year end
BYearBegin
'BAS'
business year begin
FY5253
'RE'
retail (aka 52-53 week) year
Easter
None
Easter holiday
BusinessHour
'BH'
business hour
CustomBusinessHour
'CBH'
custom business hour
Day
'D'
one absolute day
Hour
'H'
one hour
Minute
'T' or 'min'
one minute
Second
'S'
one second
Milli
'L' or 'ms'
one millisecond
Micro
'U' or 'us'
one microsecond
Nano
'N'
one nanosecond
DateOffsets additionally have rollforward() and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [152]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [153]: ts.day_name()
Out[153]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [154]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [155]: offset.rollforward(ts)
Out[155]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [156]: ts + offset
Out[156]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize() before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [157]: ts = pd.Timestamp("2014-01-01 09:00")
In [158]: day = pd.offsets.Day()
In [159]: day + ts
Out[159]: Timestamp('2014-01-02 09:00:00')
In [160]: (day + ts).normalize()
Out[160]: Timestamp('2014-01-02 00:00:00')
In [161]: ts = pd.Timestamp("2014-01-01 22:00")
In [162]: hour = pd.offsets.Hour()
In [163]: hour + ts
Out[163]: Timestamp('2014-01-01 23:00:00')
In [164]: (hour + ts).normalize()
Out[164]: Timestamp('2014-01-01 00:00:00')
In [165]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[165]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week offset for generating weekly data accepts a
weekday parameter which results in the generated dates always lying on a
particular day of the week:
In [166]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [167]: d
Out[167]: datetime.datetime(2008, 8, 18, 9, 0)
In [168]: d + pd.offsets.Week()
Out[168]: Timestamp('2008-08-25 09:00:00')
In [169]: d + pd.offsets.Week(weekday=4)
Out[169]: Timestamp('2008-08-22 09:00:00')
In [170]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[170]: 4
In [171]: d - pd.offsets.Week()
Out[171]: Timestamp('2008-08-11 09:00:00')
The normalize option will be effective for addition and subtraction.
In [172]: d + pd.offsets.Week(normalize=True)
Out[172]: Timestamp('2008-08-25 00:00:00')
In [173]: d - pd.offsets.Week(normalize=True)
Out[173]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd with the specific ending month:
In [174]: d + pd.offsets.YearEnd()
Out[174]: Timestamp('2008-12-31 09:00:00')
In [175]: d + pd.offsets.YearEnd(month=6)
Out[175]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series or DatetimeIndex to
apply the offset to each element.
In [176]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [177]: s = pd.Series(rng)
In [178]: rng
Out[178]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [179]: rng + pd.DateOffset(months=2)
Out[179]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [180]: s + pd.DateOffset(months=2)
Out[180]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [181]: s - pd.DateOffset(months=2)
Out[181]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour,
Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the
Timedelta section for more examples.
In [182]: s - pd.offsets.Day(2)
Out[182]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [183]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [184]: td
Out[184]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [185]: td + pd.offsets.Minute(15)
Out[185]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a
vectorized implementation. They can still be used but may
be significantly slower and will show a PerformanceWarning.
In [186]: rng + pd.offsets.BQuarterEnd()
Out[186]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay or CustomBusinessDay class provides a parametric
BusinessDay class which can be used to create customized business day
calendars that account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt, where a Friday-Saturday weekend is observed.
In [187]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [188]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [189]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [190]: dt = datetime.datetime(2013, 4, 30)
In [191]: dt + 2 * bday_egypt
Out[191]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [192]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [193]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[193]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the
holiday calendar section for more information.
In [194]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [195]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [196]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [197]: dt + bday_us
Out[197]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined
in the usual way.
In [198]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [199]: dt = datetime.datetime(2013, 12, 17)
In [200]: dt + bmth_us
Out[200]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [201]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[201]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string ‘C’ is used to indicate that a CustomBusinessDay
DateOffset is used. It is important to note that, since CustomBusinessDay is
a parameterised type, instances of CustomBusinessDay may differ, and this is
not detectable from the ‘C’ frequency string. The user therefore needs to
ensure that the ‘C’ frequency string is used consistently within their
application.
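As a small sketch of why the alias alone is ambiguous, two differently parameterised CustomBusinessDay instances both report the ‘C’ frequency string:
# Two different custom business day calendars share the 'C' alias
us_style = pd.offsets.CustomBusinessDay(weekmask="Mon Tue Wed Thu Fri")
egypt_style = pd.offsets.CustomBusinessDay(weekmask="Sun Mon Tue Wed Thu")
us_style.freqstr, egypt_style.freqstr  # both are 'C'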
Business hour#
The BusinessHour class provides a business hour representation on BusinessDay,
allowing you to use specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours.
Adding BusinessHour increments a Timestamp by hourly frequency.
If the target Timestamp is outside business hours, it is first moved to the next business hour
and then incremented. If the result exceeds the end of the business hours, the remaining
hours are added to the next business day.
In [202]: bh = pd.offsets.BusinessHour()
In [203]: bh
Out[203]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [204]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[204]: 4
In [205]: pd.Timestamp("2014-08-01 10:00") + bh
Out[205]: Timestamp('2014-08-01 11:00:00')
# The example below is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [206]: pd.Timestamp("2014-08-01 08:00") + bh
Out[206]: Timestamp('2014-08-01 10:00:00')
# If the result lands on the end time, move to the next business day
In [207]: pd.Timestamp("2014-08-01 16:00") + bh
Out[207]: Timestamp('2014-08-04 09:00:00')
# The remaining hours are added to the next day
In [208]: pd.Timestamp("2014-08-01 16:30") + bh
Out[208]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [209]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[209]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [210]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[210]: Timestamp('2014-07-31 15:00:00')
You can also specify start and end times by keyword. The argument must
be a str with an hour:minute representation or a datetime.time
instance. Specifying seconds, microseconds or nanoseconds as part of the business hour
results in a ValueError.
In [211]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [212]: bh
Out[212]: <BusinessHour: BH=11:00-20:00>
In [213]: pd.Timestamp("2014-08-01 13:00") + bh
Out[213]: Timestamp('2014-08-01 14:00:00')
In [214]: pd.Timestamp("2014-08-01 09:00") + bh
Out[214]: Timestamp('2014-08-01 12:00:00')
In [215]: pd.Timestamp("2014-08-01 18:00") + bh
Out[215]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than the end time represents a business hour that spans midnight.
In this case, the business hour extends past midnight and overlaps into the next day.
Valid business hours are distinguished by whether they start from a valid BusinessDay.
In [216]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [217]: bh
Out[217]: <BusinessHour: BH=17:00-09:00>
In [218]: pd.Timestamp("2014-08-01 17:00") + bh
Out[218]: Timestamp('2014-08-01 18:00:00')
In [219]: pd.Timestamp("2014-08-01 23:00") + bh
Out[219]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [220]: pd.Timestamp("2014-08-02 04:00") + bh
Out[220]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [221]: pd.Timestamp("2014-08-04 04:00") + bh
Out[221]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours results in
the next business hour start or the previous day's business hour end, respectively. Unlike other
offsets, BusinessHour.rollforward may, by definition, produce a different result than applying the offset.
This is because one day's business hour end is equal to the next day's business hour start. For example,
under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and
2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [222]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[222]: Timestamp('2014-08-01 17:00:00')
In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[223]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [224]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[224]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay never overlaps.
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[226]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary
holidays, you can use the CustomBusinessHour offset, as explained in the
following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which
allows you to specify arbitrary holidays. CustomBusinessHour works the same
as BusinessHour except that it skips the specified custom holidays.
In [227]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [228]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [229]: dt = datetime.datetime(2014, 1, 17, 15)
In [230]: dt + bhour_us
Out[230]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [231]: dt + bhour_us * 2
Out[231]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
In [232]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [233]: dt + bhour_mon * 2
Out[233]: Timestamp('2014-01-21 10:00:00')
Offset aliases#
A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as offset aliases.
Alias
Description
B
business day frequency
C
custom business day frequency
D
calendar day frequency
W
weekly frequency
M
month end frequency
SM
semi-month end frequency (15th and end of month)
BM
business month end frequency
CBM
custom business month end frequency
MS
month start frequency
SMS
semi-month start frequency (1st and 15th)
BMS
business month start frequency
CBMS
custom business month start frequency
Q
quarter end frequency
BQ
business quarter end frequency
QS
quarter start frequency
BQS
business quarter start frequency
A, Y
year end frequency
BA, BY
business year end frequency
AS, YS
year start frequency
BAS, BYS
business year start frequency
BH
business hour frequency
H
hourly frequency
T, min
minutely frequency
S
secondly frequency
L, ms
milliseconds
U, us
microseconds
N
nanoseconds
Note
When using the offset aliases above, it should be noted that functions
such as date_range(), bdate_range(), will only return
timestamps that are in the interval defined by start_date and
end_date. If the start_date does not correspond to the frequency,
the returned timestamps will start at the next valid timestamp; likewise for
end_date, the returned timestamps will stop at the previous valid
timestamp.
For example, for the offset MS, if the start_date is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date is not the first day of a month, the last
returned timestamp will be the first day of the corresponding month.
In [234]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [235]: dates_lst_1
Out[235]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [236]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [237]: dates_lst_2
Out[237]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
As the above example shows, date_range() and
bdate_range() only return the valid timestamps between the
start_date and end_date. If these are not valid timestamps for the
given frequency, the range rolls forward to the next valid value for start_date
(and backward to the previous valid value for end_date).
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in
most functions:
In [238]: pd.date_range(start, periods=5, freq="B")
Out[238]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [239]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[239]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine together day and intraday offsets:
In [240]: pd.date_range(start, periods=10, freq="2h20min")
Out[240]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [241]: pd.date_range(start, periods=10, freq="1D10U")
Out[241]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
Alias
Description
W-SUN
weekly frequency (Sundays). Same as ‘W’
W-MON
weekly frequency (Mondays)
W-TUE
weekly frequency (Tuesdays)
W-WED
weekly frequency (Wednesdays)
W-THU
weekly frequency (Thursdays)
W-FRI
weekly frequency (Fridays)
W-SAT
weekly frequency (Saturdays)
(B)Q(S)-DEC
quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN
quarterly frequency, year ends in January
(B)Q(S)-FEB
quarterly frequency, year ends in February
(B)Q(S)-MAR
quarterly frequency, year ends in March
(B)Q(S)-APR
quarterly frequency, year ends in April
(B)Q(S)-MAY
quarterly frequency, year ends in May
(B)Q(S)-JUN
quarterly frequency, year ends in June
(B)Q(S)-JUL
quarterly frequency, year ends in July
(B)Q(S)-AUG
quarterly frequency, year ends in August
(B)Q(S)-SEP
quarterly frequency, year ends in September
(B)Q(S)-OCT
quarterly frequency, year ends in October
(B)Q(S)-NOV
quarterly frequency, year ends in November
(B)A(S)-DEC
annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN
annual frequency, anchored end of January
(B)A(S)-FEB
annual frequency, anchored end of February
(B)A(S)-MAR
annual frequency, anchored end of March
(B)A(S)-APR
annual frequency, anchored end of April
(B)A(S)-MAY
annual frequency, anchored end of May
(B)A(S)-JUN
annual frequency, anchored end of June
(B)A(S)-JUL
annual frequency, anchored end of July
(B)A(S)-AUG
annual frequency, anchored end of August
(B)A(S)-SEP
annual frequency, anchored end of September
(B)A(S)-OCT
annual frequency, anchored end of October
(B)A(S)-NOV
annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors
for DatetimeIndex, as well as various other timeseries-related functions
in pandas.
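For example, a quarterly frequency whose year ends in March can be requested directly by alias (a minimal sketch):
# Quarter-end dates for a fiscal year ending in March
pd.date_range("2011-01-01", periods=4, freq="Q-MAR")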
Anchored offset semantics#
For those offsets that are anchored to the start or end of a specific
frequency (MonthEnd, MonthBegin, WeekEnd, etc), the following
rules apply to rolling forward and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous)
anchor point and moved |n|-1 additional steps forwards or backwards.
In [242]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[242]: Timestamp('2014-02-01 00:00:00')
In [243]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[243]: Timestamp('2014-01-31 00:00:00')
In [244]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-01-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2013-12-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[246]: Timestamp('2014-05-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[247]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards
or backwards.
In [248]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[248]: Timestamp('2014-02-01 00:00:00')
In [249]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[249]: Timestamp('2014-02-28 00:00:00')
In [250]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2013-12-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2013-12-31 00:00:00')
In [252]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[252]: Timestamp('2014-05-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[253]: Timestamp('2013-10-01 00:00:00')
For the case when n=0, the date is not moved if on an anchor point, otherwise
it is rolled forward to the next anchor point.
In [254]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[254]: Timestamp('2014-02-01 00:00:00')
In [255]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[255]: Timestamp('2014-01-31 00:00:00')
In [256]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-01-01 00:00:00')
In [257]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used
with CustomBusinessDay or in other analysis that requires a predefined
set of holidays. The AbstractHolidayCalendar class provides all the necessary
methods to return a list of holidays and only rules need to be defined
in a specific holiday calendar class. Furthermore, the start_date and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar is the
only calendar that exists and primarily serves as an example for developing
other calendars.
For holidays that occur on fixed dates (e.g., July 4th), an
observance rule determines when that holiday is observed if it falls on a weekend
or some other non-observed day. Defined observance rules are:
Rule
Description
nearest_workday
move Saturday to Friday and Sunday to Monday
sunday_to_monday
move Sunday to following Monday
next_monday_or_tuesday
move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday
move Saturday and Sunday to previous Friday
next_monday
move Saturday and Sunday to following Monday
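A minimal sketch of an observance rule in isolation; July 4th, 2015 fell on a Saturday, so nearest_workday observes it on the preceding Friday:
from pandas.tseries.holiday import Holiday, nearest_workday

july_4th = Holiday("July 4th", month=7, day=4, observance=nearest_workday)
july_4th.dates("2015-01-01", "2015-12-31")  # observed on 2015-07-03 (Friday)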
An example of how holidays and holiday calendars are defined:
In [258]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [259]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [260]: cal = ExampleCalendar()
In [261]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[261]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint
weekday=MO(2) is the same as 2 * Week(weekday=2)
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar. Like any other offset,
it can be used to create a DatetimeIndex or added to datetime
or Timestamp objects.
In [262]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[262]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [263]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [264]: datetime.datetime(2012, 5, 25) + offset
Out[264]: Timestamp('2012-05-29 00:00:00')
In [265]: datetime.datetime(2012, 7, 3) + offset
Out[265]: Timestamp('2012-07-05 00:00:00')
In [266]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[266]: Timestamp('2012-07-06 00:00:00')
In [267]: datetime.datetime(2012, 7, 6) + offset
Out[267]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date and end_date class attributes
of AbstractHolidayCalendar. The defaults are shown below.
In [268]: AbstractHolidayCalendar.start_date
Out[268]: Timestamp('1970-01-01 00:00:00')
In [269]: AbstractHolidayCalendar.end_date
Out[269]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as
datetime/Timestamp/string.
In [270]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [271]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [272]: cal.holidays()
Out[272]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [273]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [274]: cal = get_calendar("ExampleCalendar")
In [275]: cal.rules
Out[275]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [276]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [277]: new_cal.rules
Out[277]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Time Series-related instance methods#
Shifting / lagging#
One may want to shift or lag the values in a time series back and forward in
time. The method for this is shift(), which is available on all of
the pandas objects.
In [278]: ts = pd.Series(range(len(rng)), index=rng)
In [279]: ts = ts[:5]
In [280]: ts.shift(1)
Out[280]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64
The shift method accepts a freq argument which can be a
DateOffset class, another timedelta-like object, or an
offset alias.
When freq is specified, shift method changes all the dates in the index
rather than changing the alignment of the data and the index:
In [281]: ts.shift(5, freq="D")
Out[281]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64
In [282]: ts.shift(5, freq=pd.offsets.BDay())
Out[282]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
dtype: int64
In [283]: ts.shift(5, freq="BM")
Out[283]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
dtype: int64
Note that when freq is specified, the leading entry is no longer NaN
because the data is not being realigned.
Frequency conversion#
The primary function for changing frequencies is the asfreq()
method. For a DatetimeIndex, this is basically just a thin, but convenient
wrapper around reindex() which generates a date_range and
calls reindex.
In [284]: dr = pd.date_range("1/1/2010", periods=3, freq=3 * pd.offsets.BDay())
In [285]: ts = pd.Series(np.random.randn(3), index=dr)
In [286]: ts
Out[286]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64
In [287]: ts.asfreq(pd.offsets.BDay())
Out[287]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation
method for any gaps that may appear after the frequency conversion.
In [288]: ts.asfreq(pd.offsets.BDay(), method="pad")
Out[288]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64
Filling forward / backward#
Related to asfreq and reindex is fillna(), which is
documented in the missing data section.
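As a small sketch (using the ts series from the asfreq example above), gaps introduced by a frequency conversion can also be filled after the fact:
# Forward fill the NaNs produced by the frequency conversion
ts.asfreq(pd.offsets.BDay()).ffill()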
Converting to Python datetimes#
DatetimeIndex can be converted to an array of Python native
datetime.datetime objects using the to_pydatetime method.
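A minimal sketch:
# Convert a DatetimeIndex to an array of datetime.datetime objects
idx = pd.date_range("2012-05-01", periods=3, freq="D")
idx.to_pydatetime()  # object array of datetime.datetime instances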
Resampling#
pandas has a simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects,
see the groupby docs.
Basics#
In [289]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [290]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [291]: ts.resample("5Min").sum()
Out[291]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any function available via dispatching is available as
a method of the returned object, including sum, mean, std, sem,
max, min, median, first, last, ohlc:
In [292]: ts.resample("5Min").mean()
Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [293]: ts.resample("5Min").ohlc()
Out[293]:
open high low close
2012-01-01 308 460 9 205
In [294]: ts.resample("5Min").max()
Out[294]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [295]: ts.resample("5Min", closed="right").mean()
Out[295]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [296]: ts.resample("5Min", closed="left").mean()
Out[296]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label are used to manipulate the resulting labels.
label specifies whether the result is labeled with the beginning or
the end of the interval.
In [297]: ts.resample("5Min").mean() # by default label='left'
Out[297]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", label="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default values for label and closed are ‘left’ for all
frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’,
which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time, as in the following example with
the BusinessDay frequency:
In [299]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [300]: s.iloc[2] = pd.NaT
In [301]: s.dt.day_name()
Out[301]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [302]: s.resample("B").last().dt.day_name()
Out[302]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday.
To get the behavior where the value for Sunday is pushed to Monday, use
instead
In [303]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[303]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
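A hedged sketch of these two options, using a small quarterly series made up for illustration:
import pandas as pd

rng = pd.date_range("2012-01-01", periods=4, freq="Q")
ts = pd.Series(range(4), index=rng)

# kind='period' labels the result with a PeriodIndex instead of a DatetimeIndex
ts.resample("A", kind="period").sum()

# convention controls where values land when upsampling period data
ts.to_period().resample("M", convention="end").asfreq()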
Upsampling#
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [304]: ts[:2].resample("250L").asfreq()
Out[304]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [305]: ts[:2].resample("250L").ffill()
Out[305]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [306]: ts[:2].resample("250L").ffill(limit=2)
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
Sparse resampling#
Sparse timeseries are those where you have far fewer points relative
to the amount of time you are looking to resample. Naively upsampling a sparse
series can potentially generate lots of intermediate values. When you don’t want
to use a method to fill these values, e.g. when fill_method is None, the
intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN.
In [307]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [308]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [309]: ts.resample("3T").sum()
Out[309]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [310]: from functools import partial
In [311]: from pandas.tseries.frequencies import to_offset
In [312]: def round(t, freq):
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [313]: ts.groupby(partial(round, freq="3T")).sum()
Out[313]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
Aggregation#
Similar to the aggregating API, groupby API, and the window API,
a Resampler can be selectively resampled.
When resampling a DataFrame, the default is to act on all columns with the same function.
In [314]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [315]: r = df.resample("3T")
In [316]: r.mean()
Out[316]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [317]: r["A"].mean()
Out[317]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [318]: r[["A", "B"]].mean()
Out[318]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [319]: r["A"].agg([np.sum, np.mean, np.std])
Out[319]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [320]: r.agg([np.sum, np.mean])
Out[320]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [321]: r.agg({"A": np.sum, "B": lambda x: np.std(x, ddof=1)})
Out[321]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it
must be implemented on the resampled object:
In [322]: r.agg({"A": "sum", "B": "std"})
Out[322]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [323]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[323]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
If a DataFrame does not have a datetimelike index, but instead you want
to resample based on a datetimelike column in the frame, it can be passed to the
on keyword.
In [324]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [325]: df
Out[325]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [326]: df.resample("M", on="date")[["a"]].sum()
Out[326]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of MultiIndex, its name or location can be passed to the
level keyword.
In [327]: df.resample("M", level="d")[["a"]].sum()
Out[327]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [328]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [329]: resampled = small.resample("H")
In [330]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__ for more.
Use origin or offset to adjust the start of the bins#
New in version 1.1.0.
The bins of the grouping are adjusted based on the beginning of the day of the time series starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criterion. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [331]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [332]: middle = "2000-10-02 00:00:00"
In [333]: rng = pd.date_range(start, end, freq="7min")
In [334]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [335]: ts
Out[335]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' differ depending on where the time series starts:
In [336]: ts.resample("17min", origin="start_day").sum()
Out[336]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [337]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[337]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of where the time series starts:
In [338]: ts.resample("17min", origin="epoch").sum()
Out[338]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[339]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin:
In [340]: ts.resample("17min", origin="2001-01-01").sum()
Out[340]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[341]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If needed you can just adjust the bins with an offset Timedelta that would be added to the default origin.
Those two examples are equivalent for this time series:
In [342]: ts.resample("17min", origin="start").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [343]: ts.resample("17min", offset="23h30min").sum()
Out[343]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin on the last example. In that case, origin will be set to the first value of the timeseries.
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq. The backward resample sets closed to 'right' by default since the last value should be considered as the edge point for the last bin.
We can set origin to 'end'. The value for a specific Timestamp index stands for the resample result from the current Timestamp minus freq to the current Timestamp with a right close.
In [344]: ts.resample('17min', origin='end').sum()
Out[344]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In addition, as the counterpart of the 'start_day' option, 'end_day' is supported. This sets the origin to the ceiling midnight of the largest Timestamp.
In [345]: ts.resample('17min', origin='end_day').sum()
Out[345]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as shown by the following computation.
In [346]: ceil_mid = rng.max().ceil('D')
In [347]: freq = pd.offsets.Minute(17)
In [348]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [349]: bin_res
Out[349]: Timestamp('2000-10-02 00:29:00')
Time span representation#
Regular intervals of time are represented by Period objects in pandas while
sequences of Period objects are collected in a PeriodIndex, which can
be created with the convenience function period_range.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc).
You can specify the span via freq keyword using a frequency alias like below.
Because freq represents a span of Period, it cannot be negative like “-3D”.
In [350]: pd.Period("2012", freq="A-DEC")
Out[350]: Period('2012', 'A-DEC')
In [351]: pd.Period("2012-1-1", freq="D")
Out[351]: Period('2012-01-01', 'D')
In [352]: pd.Period("2012-1-1 19:00", freq="H")
Out[352]: Period('2012-01-01 19:00', 'H')
In [353]: pd.Period("2012-1-1 19:00", freq="5H")
Out[353]: Period('2012-01-01 19:00', '5H')
Adding and subtracting integers from periods shifts the period by its own
frequency. Arithmetic is not allowed between Period objects with different freq (span).
In [354]: p = pd.Period("2012", freq="A-DEC")
In [355]: p + 1
Out[355]: Period('2013', 'A-DEC')
In [356]: p - 3
Out[356]: Period('2009', 'A-DEC')
In [357]: p = pd.Period("2012-01", freq="2M")
In [358]: p + 2
Out[358]: Period('2012-05', '2M')
In [359]: p - 1
Out[359]: Period('2011-11', '2M')
In [360]: p == pd.Period("2012-01", freq="3M")
Out[360]: False
If the Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like values can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [361]: p = pd.Period("2014-07-01 09:00", freq="H")
In [362]: p + pd.offsets.Hour(2)
Out[362]: Period('2014-07-01 11:00', 'H')
In [363]: p + datetime.timedelta(minutes=120)
Out[363]: Period('2014-07-01 11:00', 'H')
In [364]: p + np.timedelta64(7200, "s")
Out[364]: Period('2014-07-01 11:00', 'H')
In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If Period has other frequencies, only the same offsets can be added. Otherwise, ValueError will be raised.
In [365]: p = pd.Period("2014-07", freq="M")
In [366]: p + pd.offsets.MonthEnd(3)
Out[366]: Period('2014-10', 'M')
In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will
return the number of frequency units between them:
In [367]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[367]: <10 * YearEnds: month=12>
PeriodIndex and period_range#
Regular sequences of Period objects can be collected in a PeriodIndex,
which can be constructed using the period_range convenience function:
In [368]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [369]: prng
Out[369]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex constructor can also be used directly:
In [370]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[370]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing a multiplied frequency outputs a sequence of Period objects which
have a multiplied span.
In [371]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[371]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
PeriodIndex constructor.
In [372]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[372]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas
objects:
In [373]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [374]: ps
Out[374]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [375]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [376]: idx
Out[376]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [377]: idx + pd.offsets.Hour(2)
Out[377]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [378]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [379]: idx
Out[379]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [380]: idx + pd.offsets.MonthEnd(3)
Out[380]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
Period dtypes#
PeriodIndex has a custom period dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with
period[freq] like period[D] or period[M], using frequency strings.
In [381]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [382]: pi
Out[382]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [383]: pi.dtype
Out[383]: period[M]
The period dtype can be used in .astype(...). It allows one to change the
freq of a PeriodIndex like .asfreq() and convert a
DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [384]: pi.astype("period[D]")
Out[384]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [385]: pi.astype("datetime64[ns]")
Out[385]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [386]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [387]: dti
Out[387]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [388]: dti.astype("period[M]")
Out[388]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
New in version 1.1.0.
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [389]: ps["2011-01"]
Out[389]: -2.9169013294054507
In [390]: ps[datetime.datetime(2011, 12, 25):]
Out[390]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [391]: ps["10/31/2011":"12/31/2011"]
Out[391]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than that of the PeriodIndex returns partially sliced data.
In [392]: ps["2011"]
Out[392]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [393]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [394]: dfp
Out[394]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [395]: dfp.loc["2013-01-01 10H"]
Out[395]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [396]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[396]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period and PeriodIndex can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [397]: p = pd.Period("2011", freq="A-DEC")
In [398]: p
Out[398]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can
specify whether to return the starting or ending month:
In [399]: p.asfreq("M", how="start")
Out[399]: Period('2011-01', 'M')
In [400]: p.asfreq("M", how="end")
Out[400]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [401]: p.asfreq("M", "s")
Out[401]: Period('2011-01', 'M')
In [402]: p.asfreq("M", "e")
Out[402]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of
quarterly frequency) automatically returns the super-period that includes the
input period:
In [403]: p = pd.Period("2011-12", freq="M")
In [404]: p.asfreq("A-NOV")
Out[404]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in
November, the monthly period of December 2011 is actually in the 2012 A-NOV
period.
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or
a few months into 2011. Via anchored frequencies, pandas works for all quarterly
frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [405]: p = pd.Period("2012Q1", freq="Q-DEC")
In [406]: p.asfreq("D", "s")
Out[406]: Period('2012-01-01', 'D')
In [407]: p.asfreq("D", "e")
Out[407]: Period('2012-03-31', 'D')
Q-MAR defines fiscal year end in March:
In [408]: p = pd.Period("2011Q4", freq="Q-MAR")
In [409]: p.asfreq("D", "s")
Out[409]: Period('2011-01-01', 'D')
In [410]: p.asfreq("D", "e")
Out[410]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp:
In [411]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [412]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [413]: ts
Out[413]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [414]: ps = ts.to_period()
In [415]: ps
Out[415]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [416]: ps.to_timestamp()
Out[416]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or
end of the period:
In [417]: ps.to_timestamp("D", how="s")
Out[417]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [418]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [419]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [420]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [421]: ts.head()
Out[421]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp bounds (see Timestamp limitations),
then you can use a PeriodIndex and/or Series of Periods to do computations.
In [422]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [423]: span
Out[423]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [424]: s = pd.Series([20121231, 20141130, 99991231])
In [425]: s
Out[425]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [426]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [427]: s.apply(conv)
Out[427]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [428]: s.apply(conv)[2]
Out[428]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [429]: span = pd.PeriodIndex(s.apply(conv))
In [430]: span
Out[430]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz and dateutil libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [431]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [432]: rng.tz is None
Out[432]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize method or the tz keyword argument in
date_range(), Timestamp, or DatetimeIndex.
You can either pass pytz or dateutil time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz time zone objects by default.
To return dateutil time zone objects, append dateutil/ before the string.
In pytz you can find a list of common (and less common) time zones using
from pytz import common_timezones, all_timezones.
dateutil uses the OS time zones so there isn’t a fixed list available. For
common zones, the names are the same as pytz.
In [433]: import dateutil
# pytz
In [434]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [435]: rng_pytz.tz
Out[435]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [436]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [437]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [438]: rng_dateutil.tz
Out[438]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [439]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [440]: rng_utc.tz
Out[440]: tzutc()
New in version 0.25.0.
# datetime.timezone
In [441]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [442]: rng_utc.tz
Out[442]: datetime.timezone.utc
Note that the UTC time zone is a special case in dateutil and should be constructed explicitly
as an instance of dateutil.tz.tzutc. You can also construct other time
zone objects explicitly first.
In [443]: import pytz
# pytz
In [444]: tz_pytz = pytz.timezone("Europe/London")
In [445]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [446]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [447]: rng_pytz.tz == tz_pytz
Out[447]: True
# dateutil
In [448]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [449]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [450]: rng_dateutil.tz == tz_dateutil
Out[450]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert method.
In [451]: rng_pytz.tz_convert("US/Eastern")
Out[451]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz time zones, DatetimeIndex will construct a different
time zone object than a Timestamp for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp objects that may have different UTC offsets and cannot be
succinctly represented by one pytz time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [452]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [453]: dti.tz
Out[453]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [454]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [455]: ts.tz
Out[455]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern.
Warning
Be aware that a time zone definition across versions of time zone libraries may not
be considered equal. This may cause problems when working with stored data that
is localized using one version and operated on with a different version.
See here for how to handle such a situation.
Warning
For pytz time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern')).
Instead, the datetime needs to be localized using the localize method
on the pytz time zone object.
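A short sketch of the difference (the date is arbitrary):
import datetime
import pytz

tz = pytz.timezone("US/Eastern")

# incorrect: the constructor attaches the zone's historical LMT offset
wrong = datetime.datetime(2011, 1, 1, tzinfo=tz)

# correct: build a naive datetime, then let pytz pick the proper offset
right = tz.localize(datetime.datetime(2011, 1, 1))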
Warning
Be aware that for times in the future, correct conversion between time zones
(and UTC) cannot be guaranteed by any time zone library because a timezone’s
offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies
in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments
to timezone aware dates will not be applied. If and when the underlying libraries are fixed,
the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [456]: d_2037 = "2037-03-31T010101"
In [457]: d_2038 = "2038-03-31T010101"
In [458]: DST = "Europe/London"
In [459]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [460]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex or Timestamp will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [461]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [462]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [463]: rng_eastern[2]
Out[463]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
In [464]: rng_berlin[2]
Out[464]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [465]: rng_eastern[2] == rng_berlin[2]
Out[465]: True
Operations between Series in different time zones will yield UTC
Series, aligning the data on the UTC timestamps:
In [466]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [467]: eastern = ts_utc.tz_convert("US/Eastern")
In [468]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [469]: result = eastern + berlin
In [470]: result
Out[470]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [471]: result.index
Out[471]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None) or tz_convert(None).
tz_localize(None) will remove the time zone yielding the local time representation.
tz_convert(None) will remove the time zone after converting to UTC time.
In [472]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [473]: didx
Out[473]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [474]: didx.tz_localize(None)
Out[474]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [475]: didx.tz_convert(None)
Out[475]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [476]: didx.tz_convert("UTC").tz_localize(None)
Out[476]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
New in version 1.1.0.
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only for constructing from naive datetime.datetime
(see datetime documentation for details) or from Timestamp
or for constructing from components (see below). Only dateutil timezones are supported
(see dateutil documentation
for dateutil methods that deal with ambiguous datetimes) as pytz
timezones do not support fold (see pytz documentation
for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime
with pytz, please use Timestamp.tz_localize(). In general, we recommend relying
on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct
control over how they are handled.
In [477]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[477]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [478]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[478]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
Ambiguous times when localizing#
tz_localize may not be able to determine the UTC offset of a timestamp
because daylight saving time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [479]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail as there are ambiguous times ('11/06/2011 01:00'):
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [480]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[480]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [481]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[481]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [482]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[482]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [483]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [484]: dti
Out[484]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [485]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[485]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [486]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[486]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [487]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[487]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [488]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[488]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
Time zone Series operations#
A Series with time zone naive values is
represented with a dtype of datetime64[ns].
In [489]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [490]: s_naive
Out[490]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is
represented with a dtype of datetime64[ns, tz] where tz is the time zone.
In [491]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [492]: s_aware
Out[492]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series
can be manipulated via the .dt accessor; see the dt accessor section.
For example, to localize a naive stamp and then convert it to a time zone aware one:
In [493]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[493]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Time zone information can also be manipulated using the astype method.
This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [494]: s_aware.astype("datetime64[ns, CET]")
Out[494]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note
Using Series.to_numpy() on a Series returns a NumPy array of the data.
NumPy does not currently support time zones (even though it prints in the local time zone!),
so an object array of Timestamps is returned for time zone aware data:
In [495]: s_naive.to_numpy()
Out[495]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [496]: s_aware.to_numpy()
Out[496]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
Converting to an object array of Timestamps preserves the time zone
information. For example, when converting back to a Series:
In [497]: pd.Series(s_aware.to_numpy())
Out[497]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values
converted to UTC) instead of an array of objects, you can specify the
dtype argument:
In [498]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[498]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
| 346
| 1,128
|
Pandas- How to work with pandas .dt.time timestamps
I have a dataframe df_to_db:
enter_dt_tm          exit_dt_tm           total_time       08:00-23:00
2021-07-20 13:24:00  2021-07-20 22:51:00  0 days 09:27:00  0 days 09:27:00
2021-05-31 02:52:00  2021-05-31 06:16:00  0 days 03:24:00  0 days 00:00:00
2021-06-01 23:50:00  2021-06-01 23:58:00  0 days 00:08:00  0 days 00:00:00
2021-06-01 23:58:00  2021-06-02 07:58:00  0 days 08:00:00  0 days 00:00:00
2021-06-01 07:59:00  2021-06-01 23:12:00  0 days 15:13:00  0 days 15:00:00
And I want to print only the rows where the "08:00-23:00" column is greater than 15 minutes AND the "enter_dt_tm" timestamp is not between 07:40:00 am and 08:00:00 am (this would eliminate all but the first row).
I am writing the code as follows but am having issues:
test_df = df_to_db[df_to_db['08:00-23:00'] >= '0 days 00:15:00' & df_to_db['enter_dt_tm'].dt.time <= '07:40:00' & df_to_db['enter_dt_tm'].dt.time >= '08:00:00']
print(test_df)
The first portion where I specify 08:00-23:00 is greater than 15 minutes works, but I can't seem to figure out how to work with .dt.time to get my code to work. I have tried writing it in different formats with no luck (e.g. (7, 40, 0), 7, 40, 0) etc.
Any help would be greatly appreciated.
EDIT: The error I am receiving is: TypeError: '<=' not supported between instances of 'datetime.time' and 'str'
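One possible sketch of that comparison, assuming the "08:00-23:00" column holds timedeltas and comparing against datetime.time objects rather than strings (df_to_db is the frame shown above):
import datetime
import pandas as pd

# rows where the gap column exceeds 15 minutes
gap_ok = df_to_db["08:00-23:00"] >= pd.Timedelta("15min")

# rows whose entry time-of-day falls in the 07:40-08:00 window
in_window = df_to_db["enter_dt_tm"].dt.time.between(
    datetime.time(7, 40), datetime.time(8, 0)
)

test_df = df_to_db[gap_ok & ~in_window]
print(test_df)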
|
60,444,733
|
Drop consecutive duplicates across multiple columns - Pandas
|
<p>There's a few questions on this but not using location based indexing of multiple columns: <a href="https://stackoverflow.com/questions/19463985/pandas-drop-consecutive-duplicates">Pandas: Drop consecutive duplicates</a>.</p>
<p>I have a <code>df</code> that may contain consecutive duplicate values across specific rows. I want to remove them for the last two columns only. Using the <code>df</code> below, I want to drop rows where values in <code>year</code> and <code>sale</code> are the same.</p>
<p>I'm getting an error using the query below.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'month': [1, 4, 7, 10, 12, 12],
'year': ['12', '14', '14', '13', '15', '15'],
'sale': ['55', '40', '40', '84', '31', '32']})
cols = df.iloc[:,1:3]
# Option 1
df = df.loc[df[cols] != df['cols'].shift()].reset_index(drop = True)
</code></pre>
<blockquote>
<p>ValueError: Must pass DataFrame with boolean values only</p>
</blockquote>
<pre><code># Option 2
df = df[df.iloc[:,1:3].diff().ne(0).any(1)].reset_index(drop = True)
</code></pre>
<blockquote>
<p>TypeError: unsupported operand type(s) for -: 'str' and 'str'</p>
</blockquote>
<p>Intended Output:</p>
<pre><code> month year sale
0 1 2012 55
1 4 2014 40
3 10 2013 84
4 12 2014 31
5 12 2014 32
</code></pre>
<p>Notes: </p>
<p>1) I need to use index label for selecting columns as the labels will change. I need something fluid.</p>
<p>2) <code>drop_duplicates</code> isn't appropriate here as I only want to drop rows that are the same as the previous row. I don't want to drop the same value altogether. </p>
| 60,444,811
| 2020-02-28T03:12:54.457000
| 1
| null | 0
| 737
|
python|pandas
|
<p><em>I want to drop rows where values in <code>year</code> and <code>sale</code> are the same</em>. That means you can calculate the difference and check whether it equals zero on <code>year</code> and <code>sale</code>:</p>
<pre><code># if the data are numeric
# s = df[['year','sale']].diff().ne(0).any(1)
s = df[['year','sale']].ne(df[['year','sale']].shift()).any(1)
df[s]
</code></pre>
<p>Output:</p>
<pre><code> month year sale
0 1 2012 55
1 4 2014 40
3 10 2013 84
4 12 2014 31
5 12 2014 32
</code></pre>
| 2020-02-28T03:22:24.867000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html
|
pandas.DataFrame.interpolate#
DataFrame.interpolate(method='linear', *, axis=0, limit=None, inplace=False, limit_direction=None, limit_area=None, downcast=None, **kwargs)[source]#
Fill NaN values using an interpolation method.
Please note that only method='linear' is supported for
DataFrame/Series with a MultiIndex.
Parameters
method : str, default ‘linear’
Interpolation technique to use. One of:
‘linear’: Ignore the index and treat the values as equally
spaced. This is the only method supported on MultiIndexes.
I want to drop rows where values in year and sale are the same. That means you can calculate the difference and check whether it equals zero on year and sale:
# if the data are numeric
# s = df[['year','sale']].diff().ne(0).any(1)
s = df[['year','sale']].ne(df[['year','sale']].shift()).any(1)
df[s]
Output:
month year sale
0 1 2012 55
1 4 2014 40
3 10 2013 84
4 12 2014 31
5 12 2014 32
‘time’: Works on daily and higher resolution data to interpolate
given length of interval.
‘index’, ‘values’: use the actual numerical values of the index.
‘pad’: Fill in NaNs using existing values.
‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘spline’,
‘barycentric’, ‘polynomial’: Passed to
scipy.interpolate.interp1d. These methods use the numerical
values of the index. Both ‘polynomial’ and ‘spline’ require that
you also specify an order (int), e.g.
df.interpolate(method='polynomial', order=5).
‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’, ‘akima’,
‘cubicspline’: Wrappers around the SciPy interpolation methods of
similar names. See Notes.
‘from_derivatives’: Refers to
scipy.interpolate.BPoly.from_derivatives which
replaces ‘piecewise_polynomial’ interpolation method in
scipy 0.18.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
Axis to interpolate along. For Series this parameter is unused
and defaults to 0.
limit : int, optional
Maximum number of consecutive NaNs to fill. Must be greater than
0.
inplace : bool, default False
Update the data in place if possible.
limit_direction : {‘forward’, ‘backward’, ‘both’}, optional
Consecutive NaNs will be filled in this direction.
If limit is specified:
If ‘method’ is ‘pad’ or ‘ffill’, ‘limit_direction’ must be ‘forward’.
If ‘method’ is ‘backfill’ or ‘bfill’, ‘limit_direction’ must be
‘backward’.
If ‘limit’ is not specified:
If ‘method’ is ‘backfill’ or ‘bfill’, the default is ‘backward’
else the default is ‘forward’
Changed in version 1.1.0: raises ValueError if limit_direction is ‘forward’ or ‘both’ and
method is ‘backfill’ or ‘bfill’.
raises ValueError if limit_direction is ‘backward’ or ‘both’ and
method is ‘pad’ or ‘ffill’.
limit_area : {None, ‘inside’, ‘outside’}, default None
If limit is specified, consecutive NaNs will be filled with this
restriction.
None: No fill restriction.
‘inside’: Only fill NaNs surrounded by valid values
(interpolate).
‘outside’: Only fill NaNs outside valid values (extrapolate).
downcast : optional, ‘infer’ or None, defaults to None
Downcast dtypes if possible.
**kwargs : optional
Keyword arguments to pass on to the interpolating function.
Returns
Series or DataFrame or None
Returns the same object type as the caller, interpolated at
some or all NaN values or None if inplace=True.
See also
fillna : Fill missing values using different methods.
scipy.interpolate.Akima1DInterpolator : Piecewise cubic polynomials (Akima interpolator).
scipy.interpolate.BPoly.from_derivatives : Piecewise polynomial in the Bernstein basis.
scipy.interpolate.interp1d : Interpolate a 1-D function.
scipy.interpolate.KroghInterpolator : Interpolate polynomial (Krogh interpolator).
scipy.interpolate.PchipInterpolator : PCHIP 1-d monotonic cubic interpolation.
scipy.interpolate.CubicSpline : Cubic spline data interpolator.
Notes
The ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’
methods are wrappers around the respective SciPy implementations of
similar names. These use the actual numerical values of the index.
For more information on their behavior, see the
SciPy documentation.
Examples
Filling in NaN in a Series via linear
interpolation.
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
Filling in NaN in a Series by padding, but filling at most two
consecutive NaN at a time.
>>> s = pd.Series([np.nan, "single_one", np.nan,
... "fill_two_more", np.nan, np.nan, np.nan,
... 4.71, np.nan])
>>> s
0 NaN
1 single_one
2 NaN
3 fill_two_more
4 NaN
5 NaN
6 NaN
7 4.71
8 NaN
dtype: object
>>> s.interpolate(method='pad', limit=2)
0 NaN
1 single_one
2 single_one
3 fill_two_more
4 fill_two_more
5 fill_two_more
6 NaN
7 4.71
8 4.71
dtype: object
Filling in NaN in a Series via polynomial interpolation or splines:
Both ‘polynomial’ and ‘spline’ methods require that you also specify
an order (int).
>>> s = pd.Series([0, 2, np.nan, 8])
>>> s.interpolate(method='polynomial', order=2)
0 0.000000
1 2.000000
2 4.666667
3 8.000000
dtype: float64
Fill the DataFrame forward (that is, going down) along each column
using linear interpolation.
Note how the last entry in column ‘a’ is interpolated differently,
because there is no entry after it to use for interpolation.
Note how the first entry in column ‘b’ remains NaN, because there
is no entry before it to use for interpolation.
>>> df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0),
... (np.nan, 2.0, np.nan, np.nan),
... (2.0, 3.0, np.nan, 9.0),
... (np.nan, 4.0, -4.0, 16.0)],
... columns=list('abcd'))
>>> df
a b c d
0 0.0 NaN -1.0 1.0
1 NaN 2.0 NaN NaN
2 2.0 3.0 NaN 9.0
3 NaN 4.0 -4.0 16.0
>>> df.interpolate(method='linear', limit_direction='forward', axis=0)
a b c d
0 0.0 NaN -1.0 1.0
1 1.0 2.0 -2.0 5.0
2 2.0 3.0 -3.0 9.0
3 2.0 4.0 -4.0 16.0
Using polynomial interpolation.
>>> df['d'].interpolate(method='polynomial', order=2)
0 1.0
1 4.0
2 9.0
3 16.0
Name: d, dtype: float64
| 551
| 983
|
Drop consecutive duplicates across multiple columns - Pandas
There's a few questions on this but not using location based indexing of multiple columns: Pandas: Drop consecutive duplicates.
I have a df that may contain consecutive duplicate values across specific rows. I want to remove them for the last two columns only. Using the df below, I want to drop rows where values in year and sale are the same.
I'm getting an error using the query below.
import pandas as pd
df = pd.DataFrame({'month': [1, 4, 7, 10, 12, 12],
'year': ['12', '14', '14', '13', '15', '15'],
'sale': ['55', '40', '40', '84', '31', '32']})
cols = df.iloc[:,1:3]
# Option 1
df = df.loc[df[cols] != df['cols'].shift()].reset_index(drop = True)
ValueError: Must pass DataFrame with boolean values only
# Option 2
df = df[df.iloc[:,1:3].diff().ne(0).any(1)].reset_index(drop = True)
TypeError: unsupported operand type(s) for -: 'str' and 'str'
Intended Output:
month year sale
0 1 2012 55
1 4 2014 40
3 10 2013 84
4 12 2014 31
5 12 2014 32
Notes:
1) I need to use index label for selecting columns as the labels will change. I need something fluid.
2) drop_duplicates isn't appropriate here as I only want to drop rows that are the same as the previous row. I don't want to drop the same value altogether.
|
63,694,748
|
Apply an if/then condition over all elements of a Pandas dataframe
|
<p>I need to apply a very simple if/then function to every element in a Pandas Dataframe.</p>
<p>If the value of <em>any</em> element is over 0.5, I need to return a 1. Otherwise, I need to return a 0.</p>
<p>This seemed really simple with a lambda function, but every time I try I get an error: <code>'ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()'</code></p>
<p>So far I have:</p>
<pre><code>df_new = df.apply(lambda x: 1 if x > 0.5 else 1)
</code></pre>
<p>I'd be grateful for any help.</p>
| 63,694,908
| 2020-09-01T20:08:13.210000
| 2
| null | 2
| 254
|
python|pandas
|
<p>You should use <code>applymap</code> instead because you want the operation to be performed on each element in your dataframe, not each column, which is what <code>apply</code> does.</p>
<pre><code>df = pd.DataFrame({"A": [0.1, 0.2, 0.5, 0.6, 0.7],
"B": [0.75, 0.85, 0.2, 0.9, 0.0],
"C": [0.2, 0.51, 0.49, 0.3, 0.1]})
print(df)
A B C
0 0.1 0.75 0.20
1 0.2 0.85 0.51
2 0.5 0.20 0.49
3 0.6 0.90 0.30
4 0.7 0.00 0.10
df_new = df.applymap(lambda x: 1 if x > 0.5 else 0)
print(df_new)
A B C
0 0 1 0
1 0 1 1
2 0 0 0
3 1 1 0
4 1 0 0
</code></pre>
| 2020-09-01T20:19:17.933000
| 2
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.any.html
|
pandas.DataFrame.any#
You should use applymap instead because you want the operation to be performed on each element in your dataframe, not each column, which is what apply does.
df = pd.DataFrame({"A": [0.1, 0.2, 0.5, 0.6, 0.7],
"B": [0.75, 0.85, 0.2, 0.9, 0.0],
"C": [0.2, 0.51, 0.49, 0.3, 0.1]})
print(df)
A B C
0 0.1 0.75 0.20
1 0.2 0.85 0.51
2 0.5 0.20 0.49
3 0.6 0.90 0.30
4 0.7 0.00 0.10
df_new = df.applymap(lambda x: 1 if x > 0.5 else 0)
print(df_new)
A B C
0 0 1 0
1 0 1 1
2 0 0 0
3 1 1 0
4 1 0 0
DataFrame.any(*, axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]#
Return whether any element is True, potentially over an axis.
Returns False unless there is at least one element within a series or
along a Dataframe axis that is True or equivalent (e.g. non-zero or
non-empty).
Parameters
axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced. For Series this parameter
is unused and defaults to 0.
0 / ‘index’ : reduce the index, return a Series whose index is the
original column labels.
1 / ‘columns’ : reduce the columns, return a Series whose index is the
original index.
None : reduce all axes, return a scalar.
bool_only : bool, default None
Include only boolean columns. If None, will attempt to use everything,
then use only boolean data. Not implemented for Series.
skipna : bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is
True, then the result will be False, as for an empty row/column.
If skipna is False, then NA are treated as True, because these are not
equal to zero.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrame
If level is specified, then DataFrame is returned; otherwise, Series
is returned.
See also
numpy.any : Numpy version of this method.
Series.any : Return whether any element is True.
Series.all : Return whether all elements are True.
DataFrame.any : Return whether any element is True over requested axis.
DataFrame.all : Return whether all elements are True over requested axis.
Examples
Series
For Series input, the output is a scalar indicating whether any element
is True.
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([], dtype="float64").any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
>>> df
A B
0 True 1
1 False 2
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df
A B
0 True 1
1 False 0
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
Aggregating over the entire DataFrame with axis=None.
>>> df.any(axis=None)
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
| 46
| 674
|
Apply an if/then condition over all elements of a Pandas dataframe
I need to apply a very simple if/then function to every element in a Pandas Dataframe.
If the value of any element is over 0.5, I need to return a 1. Otherwise, I need to return a 0.
This seemed really simple with a lambda function, but every time I try I get an error: 'ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()'
So far I have:
df_new = df.apply(lambda x: 1 if x > 0.5 else 1)
I'd be grateful for any help.
|
61,817,896
|
Fill panda columns with conditions
|
<p>I'm trying to fill a column C with conditions: if the value of column B is None, then fill column C with the value of column A. If column B is not None, then fill column C with the value 3</p>
<p><strong>I tried:</strong></p>
<pre><code>import pandas
df = pandas.DataFrame([{'A': 5, 'B': None, 'C': ''},
{'A': 2, 'B': "foo", 'C': ''},
{'A': 6, 'B': "foo", 'C': ''},
{'A': 1, 'B': None, 'C': ''}])
df["C"] = df["B"].apply(lambda x: 3 if (x != None) else df["A"])
</code></pre>
<p><strong>My output:</strong> </p>
<blockquote>
<p>TypeError: object of type 'int' has no len()</p>
</blockquote>
<p>I know the problem is df["A"], but I don't know how to solve it</p>
<p><strong>Good output:</strong></p>
<pre><code>df = pandas.DataFrame([{'A': 5, 'B': None, 'C': 5},
{'A': 2, 'B': "foo", 'C': 3},
{'A': 6, 'B': "foo", 'C': 3},
{'A': 1, 'B': None, 'C': 1}])
</code></pre>
| 61,817,930
| 2020-05-15T11:14:46.713000
| 1
| null | 1
| 32
|
python|pandas
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with test <code>None</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a>:</p>
<pre><code>df["C"] = np.where(df["B"].isna(), df['A'], 3)
#alternative
#df["C"] = df['A'].where(df["B"].isna(), 3)
print (df)
A B C
0 5 None 5
1 2 foo 3
2 6 foo 3
3 1 None 1
</code></pre>
| 2020-05-15T11:16:48.710000
| 3
|
https://pandas.pydata.org/docs/user_guide/missing_data.html
|
Working with missing data#
Working with missing data#
In this section, we will discuss missing (also referred to as NA) values in
pandas.
Note
The choice of using NaN internally to denote missing data was largely
for simplicity and performance reasons.
Starting from pandas 1.0, some optional data types start experimenting
with a native NA scalar using a mask-based approach. See
here for more.
See the cookbook for some advanced strategies.
Values considered “missing”#
As data comes in many shapes and forms, pandas aims to be flexible with regard
to handling missing data. While NaN is the default missing value marker for
Use numpy.where with test None by Series.isna:
df["C"] = np.where(df["B"].isna(), df['A'], 3)
#alternative
#df["C"] = df['A'].where(df["B"].isna(), 3)
print (df)
A B C
0 5 None 5
1 2 foo 3
2 6 foo 3
3 1 None 1
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python None will
arise and we wish to also consider that “missing” or “not available” or “NA”.
Note
If you want to consider inf and -inf to be “NA” in computations,
you can set pandas.options.mode.use_inf_as_na = True.
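A minimal sketch of that option in action, assuming a small Series with infinities:
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.inf, -np.inf])
print(s.isna())                               # inf and -inf are not NA by default
with pd.option_context("mode.use_inf_as_na", True):
    print(s.isna())                           # inside the context they are treated as NA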
In [1]: df = pd.DataFrame(
...: np.random.randn(5, 3),
...: index=["a", "c", "e", "f", "h"],
...: columns=["one", "two", "three"],
...: )
...:
In [2]: df["four"] = "bar"
In [3]: df["five"] = df["one"] > 0
In [4]: df
Out[4]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
c -1.135632 1.212112 -0.173215 bar False
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
h 0.721555 -0.706771 -1.039575 bar True
In [5]: df2 = df.reindex(["a", "b", "c", "d", "e", "f", "g", "h"])
In [6]: df2
Out[6]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
b NaN NaN NaN NaN NaN
c -1.135632 1.212112 -0.173215 bar False
d NaN NaN NaN NaN NaN
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
g NaN NaN NaN NaN NaN
h 0.721555 -0.706771 -1.039575 bar True
To make detecting missing values easier (and across different array dtypes),
pandas provides the isna() and
notna() functions, which are also methods on
Series and DataFrame objects:
In [7]: df2["one"]
Out[7]:
a 0.469112
b NaN
c -1.135632
d NaN
e 0.119209
f -2.104569
g NaN
h 0.721555
Name: one, dtype: float64
In [8]: pd.isna(df2["one"])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
In [9]: df2["four"].notna()
Out[9]:
a True
b False
c True
d False
e True
f True
g False
h True
Name: four, dtype: bool
In [10]: df2.isna()
Out[10]:
one two three four five
a False False False False False
b True True True True True
c False False False False False
d True True True True True
e False False False False False
f False False False False False
g True True True True True
h False False False False False
Warning
One has to be mindful that in Python (and NumPy), the nan's don’t compare equal, but None's do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None # noqa: E711
Out[11]: True
In [12]: np.nan == np.nan
Out[12]: False
So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful information.
In [13]: df2["one"] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool
Integer dtypes and missing data#
Because NaN is a float, a column of integers with even one missing values
is cast to floating-point dtype (see Support for integer NA for more). pandas
provides a nullable integer array, which can be used by explicitly requesting
the dtype:
In [14]: pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
Out[14]:
0 1
1 2
2 <NA>
3 4
dtype: Int64
Alternatively, the string alias dtype='Int64' (note the capital "I") can be
used.
See Nullable integer data type for more.
Datetimes#
For datetime64[ns] types, NaT represents missing values. This is a pseudo-native
sentinel value that can be represented by NumPy in a singular dtype (datetime64[ns]).
pandas objects provide compatibility between NaT and NaN.
In [15]: df2 = df.copy()
In [16]: df2["timestamp"] = pd.Timestamp("20120101")
In [17]: df2
Out[17]:
one two three four five timestamp
a 0.469112 -0.282863 -1.509059 bar True 2012-01-01
c -1.135632 1.212112 -0.173215 bar False 2012-01-01
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h 0.721555 -0.706771 -1.039575 bar True 2012-01-01
In [18]: df2.loc[["a", "c", "h"], ["one", "timestamp"]] = np.nan
In [19]: df2
Out[19]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [20]: df2.dtypes.value_counts()
Out[20]:
float64 3
object 1
bool 1
datetime64[ns] 1
dtype: int64
Inserting missing data#
You can insert missing values by simply assigning to containers. The
actual missing value used will be chosen based on the dtype.
For example, numeric containers will always use NaN regardless of
the missing value type chosen:
In [21]: s = pd.Series([1, 2, 3])
In [22]: s.loc[0] = None
In [23]: s
Out[23]:
0 NaN
1 2.0
2 3.0
dtype: float64
Likewise, datetime containers will always use NaT.
For object containers, pandas will use the value given:
In [24]: s = pd.Series(["a", "b", "c"])
In [25]: s.loc[0] = None
In [26]: s.loc[1] = np.nan
In [27]: s
Out[27]:
0 None
1 NaN
2 c
dtype: object
Calculations with missing data#
Missing values propagate naturally through arithmetic operations between pandas
objects.
In [28]: a
Out[28]:
one two
a NaN -0.282863
c NaN 1.212112
e 0.119209 -1.044236
f -2.104569 -0.494929
h -2.104569 -0.706771
In [29]: b
Out[29]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [30]: a + b
Out[30]:
one three two
a NaN NaN -0.565727
c NaN NaN 2.424224
e 0.238417 NaN -2.088472
f -4.209138 NaN -0.989859
h NaN NaN -1.413542
The descriptive statistics and computational methods discussed in the
data structure overview (and listed here and here) are all written to
account for missing data. For example:
When summing data, NA (missing) values will be treated as zero.
If the data are all NA, the result will be 0.
Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include NA values, use skipna=False.
In [31]: df
Out[31]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [32]: df["one"].sum()
Out[32]: -1.9853605075978744
In [33]: df.mean(1)
Out[33]:
a -0.895961
c 0.519449
e -0.595625
f -0.509232
h -0.873173
dtype: float64
In [34]: df.cumsum()
Out[34]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e 0.119209 -0.114987 -2.544122
f -1.985361 -0.609917 -1.472318
h NaN -1.316688 -2.511893
In [35]: df.cumsum(skipna=False)
Out[35]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e NaN -0.114987 -2.544122
f NaN -0.609917 -1.472318
h NaN -1.316688 -2.511893
Sum/prod of empties/nans#
Warning
This behavior is now standard as of v0.22.0 and is consistent with the default in numpy; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
See v0.22.0 whatsnew for more.
The sum of an empty or all-NA Series or column of a DataFrame is 0.
In [36]: pd.Series([np.nan]).sum()
Out[36]: 0.0
In [37]: pd.Series([], dtype="float64").sum()
Out[37]: 0.0
The product of an empty or all-NA Series or column of a DataFrame is 1.
In [38]: pd.Series([np.nan]).prod()
Out[38]: 1.0
In [39]: pd.Series([], dtype="float64").prod()
Out[39]: 1.0
NA values in GroupBy#
NA groups in GroupBy are automatically excluded. This behavior is consistent
with R, for example:
In [40]: df
Out[40]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [41]: df.groupby("one").mean()
Out[41]:
two three
one
-2.104569 -0.494929 1.071804
0.119209 -1.044236 -0.861849
See the groupby section here for more information.
Cleaning / filling missing data#
pandas objects are equipped with various data manipulation methods for dealing
with missing data.
Filling missing values: fillna#
fillna() can “fill in” NA values with non-NA data in a couple
of ways, which we illustrate:
Replace NA with a scalar value
In [42]: df2
Out[42]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [43]: df2.fillna(0)
Out[43]:
one two three four five timestamp
a 0.000000 -0.282863 -1.509059 bar True 0
c 0.000000 1.212112 -0.173215 bar False 0
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01 00:00:00
f -2.104569 -0.494929 1.071804 bar False 2012-01-01 00:00:00
h 0.000000 -0.706771 -1.039575 bar True 0
In [44]: df2["one"].fillna("missing")
Out[44]:
a missing
c missing
e 0.119209
f -2.104569
h missing
Name: one, dtype: object
Fill gaps forward or backward
Using the same filling arguments as reindexing, we
can propagate non-NA values forward or backward:
In [45]: df
Out[45]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [46]: df.fillna(method="pad")
Out[46]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h -2.104569 -0.706771 -1.039575
Limit the amount of filling
If we only want consecutive gaps filled up to a certain number of data points,
we can use the limit keyword:
In [47]: df
Out[47]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN NaN NaN
f NaN NaN NaN
h NaN -0.706771 -1.039575
In [48]: df.fillna(method="pad", limit=1)
Out[48]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 1.212112 -0.173215
f NaN NaN NaN
h NaN -0.706771 -1.039575
To remind you, these are the available filling methods:
Method
Action
pad / ffill
Fill values forward
bfill / backfill
Fill values backward
With time series data, using pad/ffill is extremely common so that the “last
known value” is available at every time point.
ffill() is equivalent to fillna(method='ffill')
and bfill() is equivalent to fillna(method='bfill')
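A minimal sketch of that equivalence, assuming a small Series with an interior gap:
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])
assert s.ffill().equals(s.fillna(method="ffill"))   # forward fill: 1, 1, 1, 4
assert s.bfill().equals(s.fillna(method="bfill"))   # backward fill: 1, 4, 4, 4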
Filling with a PandasObject#
You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series
must match the columns of the frame you wish to fill. The
use case of this is to fill a DataFrame with the mean of that column.
In [49]: dff = pd.DataFrame(np.random.randn(10, 3), columns=list("ABC"))
In [50]: dff.iloc[3:5, 0] = np.nan
In [51]: dff.iloc[4:6, 1] = np.nan
In [52]: dff.iloc[5:8, 2] = np.nan
In [53]: dff
Out[53]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN NaN -1.157892
5 -1.344312 NaN NaN
6 -0.109050 1.643563 NaN
7 0.357021 -0.674600 NaN
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [54]: dff.fillna(dff.mean())
Out[54]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [55]: dff.fillna(dff.mean()["B":"C"])
Out[55]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Same result as above, but is aligning the ‘fill’ value which is
a Series in this case.
In [56]: dff.where(pd.notna(dff), dff.mean(), axis="columns")
Out[56]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Dropping axis labels with missing data: dropna#
You may wish to simply exclude labels from a data set which refer to missing
data. To do this, use dropna():
In [57]: df
Out[57]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -0.706771 -1.039575
In [58]: df.dropna(axis=0)
Out[58]:
Empty DataFrame
Columns: [one, two, three]
Index: []
In [59]: df.dropna(axis=1)
Out[59]:
two three
a -0.282863 -1.509059
c 1.212112 -0.173215
e 0.000000 0.000000
f 0.000000 0.000000
h -0.706771 -1.039575
In [60]: df["one"].dropna()
Out[60]: Series([], Name: one, dtype: float64)
An equivalent dropna() is available for Series.
DataFrame.dropna has considerably more options than Series.dropna, which can be
examined in the API.
Interpolation#
Both Series and DataFrame objects have interpolate()
that, by default, performs linear interpolation at missing data points.
In [61]: ts
Out[61]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [62]: ts.count()
Out[62]: 66
In [63]: ts.plot()
Out[63]: <AxesSubplot: >
In [64]: ts.interpolate()
Out[64]:
2000-01-31 0.469112
2000-02-29 0.434469
2000-03-31 0.399826
2000-04-28 0.365184
2000-05-31 0.330541
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [65]: ts.interpolate().count()
Out[65]: 100
In [66]: ts.interpolate().plot()
Out[66]: <AxesSubplot: >
Index aware interpolation is available via the method keyword:
In [67]: ts2
Out[67]:
2000-01-31 0.469112
2000-02-29 NaN
2002-07-31 -5.785037
2005-01-31 NaN
2008-04-30 -9.011531
dtype: float64
In [68]: ts2.interpolate()
Out[68]:
2000-01-31 0.469112
2000-02-29 -2.657962
2002-07-31 -5.785037
2005-01-31 -7.398284
2008-04-30 -9.011531
dtype: float64
In [69]: ts2.interpolate(method="time")
Out[69]:
2000-01-31 0.469112
2000-02-29 0.270241
2002-07-31 -5.785037
2005-01-31 -7.190866
2008-04-30 -9.011531
dtype: float64
For a floating-point index, use method='values':
In [70]: ser
Out[70]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64
In [71]: ser.interpolate()
Out[71]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64
In [72]: ser.interpolate(method="values")
Out[72]:
0.0 0.0
1.0 1.0
10.0 10.0
dtype: float64
You can also interpolate with a DataFrame:
In [73]: df = pd.DataFrame(
....: {
....: "A": [1, 2.1, np.nan, 4.7, 5.6, 6.8],
....: "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4],
....: }
....: )
....:
In [74]: df
Out[74]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
In [75]: df.interpolate()
Out[75]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
The method argument gives access to fancier interpolation methods.
If you have scipy installed, you can pass the name of a 1-d interpolation routine to method.
You’ll want to consult the full scipy interpolation documentation and reference guide for details.
The appropriate interpolation method will depend on the type of data you are working with.
If you are dealing with a time series that is growing at an increasing rate,
method='quadratic' may be appropriate.
If you have values approximating a cumulative distribution function,
then method='pchip' should work well.
To fill missing values with goal of smooth plotting, consider method='akima'.
Warning
These methods require scipy.
In [76]: df.interpolate(method="barycentric")
Out[76]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400
In [77]: df.interpolate(method="pchip")
Out[77]:
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000
In [78]: df.interpolate(method="akima")
Out[78]:
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
When interpolating via a polynomial or spline approximation, you must also specify
the degree or order of the approximation:
In [79]: df.interpolate(method="spline", order=2)
Out[79]:
A B
0 1.000000 0.250000
1 2.100000 -0.428598
2 3.404545 1.206900
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [80]: df.interpolate(method="polynomial", order=2)
Out[80]:
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
Compare several methods:
In [81]: np.random.seed(2)
In [82]: ser = pd.Series(np.arange(1, 10.1, 0.25) ** 2 + np.random.randn(37))
In [83]: missing = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
In [84]: ser[missing] = np.nan
In [85]: methods = ["linear", "quadratic", "cubic"]
In [86]: df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})
In [87]: df.plot()
Out[87]: <AxesSubplot: >
Another use case is interpolation at new values.
Suppose you have 100 observations from some distribution. And let’s suppose
that you’re particularly interested in what’s happening around the middle.
You can mix pandas’ reindex and interpolate methods to interpolate
at the new values.
In [88]: ser = pd.Series(np.sort(np.random.uniform(size=100)))
# interpolate at new_index
In [89]: new_index = ser.index.union(pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75]))
In [90]: interp_s = ser.reindex(new_index).interpolate(method="pchip")
In [91]: interp_s[49:51]
Out[91]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64
Interpolation limits#
Like other pandas fill methods, interpolate() accepts a limit keyword
argument. Use this argument to limit the number of consecutive NaN values
filled since the last valid observation:
In [92]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
In [93]: ser
Out[93]:
0 NaN
1 NaN
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive values in a forward direction
In [94]: ser.interpolate()
Out[94]:
0 NaN
1 NaN
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
# fill one consecutive value in a forward direction
In [95]: ser.interpolate(limit=1)
Out[95]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 NaN
6 13.0
7 13.0
8 NaN
dtype: float64
By default, NaN values are filled in a forward direction. Use
limit_direction parameter to fill backward or from both directions.
# fill one consecutive value backwards
In [96]: ser.interpolate(limit=1, limit_direction="backward")
Out[96]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill one consecutive value in both directions
In [97]: ser.interpolate(limit=1, limit_direction="both")
Out[97]:
0 NaN
1 5.0
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 13.0
8 NaN
dtype: float64
# fill all consecutive values in both directions
In [98]: ser.interpolate(limit_direction="both")
Out[98]:
0 5.0
1 5.0
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
By default, NaN values are filled whether they are inside (surrounded by)
existing valid values, or outside existing valid values. The limit_area
parameter restricts filling to either inside or outside values.
# fill one consecutive inside value in both directions
In [99]: ser.interpolate(limit_direction="both", limit_area="inside", limit=1)
Out[99]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values backward
In [100]: ser.interpolate(limit_direction="backward", limit_area="outside")
Out[100]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values in both directions
In [101]: ser.interpolate(limit_direction="both", limit_area="outside")
Out[101]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 13.0
8 13.0
dtype: float64
Replacing generic values#
Often times we want to replace arbitrary values with other values.
replace() in Series and replace() in DataFrame provides an efficient yet
flexible way to perform such replacements.
For a Series, you can replace a single value or a list of values by another
value:
In [102]: ser = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
In [103]: ser.replace(0, 5)
Out[103]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
You can replace a list of values by a list of other values:
In [104]: ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
Out[104]:
0 4.0
1 3.0
2 2.0
3 1.0
4 0.0
dtype: float64
You can also specify a mapping dict:
In [105]: ser.replace({0: 10, 1: 100})
Out[105]:
0 10.0
1 100.0
2 2.0
3 3.0
4 4.0
dtype: float64
For a DataFrame, you can specify individual values by column:
In [106]: df = pd.DataFrame({"a": [0, 1, 2, 3, 4], "b": [5, 6, 7, 8, 9]})
In [107]: df.replace({"a": 0, "b": 5}, 100)
Out[107]:
a b
0 100 100
1 1 6
2 2 7
3 3 8
4 4 9
Instead of replacing with specified values, you can treat all given values as
missing and interpolate over them:
In [108]: ser.replace([1, 2, 3], method="pad")
Out[108]:
0 0.0
1 0.0
2 0.0
3 0.0
4 4.0
dtype: float64
String/regular expression replacement#
Note
Python strings prefixed with the r character such as r'hello world'
are so-called “raw” strings. They have different semantics regarding
backslashes than strings without this prefix. Backslashes in raw strings
will be interpreted as an escaped backslash, e.g., r'\' == '\\'. You
should read about them
if this is unclear.
Replace the ‘.’ with NaN (str -> str):
In [109]: d = {"a": list(range(4)), "b": list("ab.."), "c": ["a", "b", np.nan, "d"]}
In [110]: df = pd.DataFrame(d)
In [111]: df.replace(".", np.nan)
Out[111]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Now do it with a regular expression that removes surrounding whitespace
(regex -> regex):
In [112]: df.replace(r"\s*\.\s*", np.nan, regex=True)
Out[112]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Replace a few different values (list -> list):
In [113]: df.replace(["a", "."], ["b", np.nan])
Out[113]:
a b c
0 0 b b
1 1 b b
2 2 NaN NaN
3 3 NaN d
list of regex -> list of regex:
In [114]: df.replace([r"\.", r"(a)"], ["dot", r"\1stuff"], regex=True)
Out[114]:
a b c
0 0 astuff astuff
1 1 b b
2 2 dot NaN
3 3 dot d
Only search in column 'b' (dict -> dict):
In [115]: df.replace({"b": "."}, {"b": np.nan})
Out[115]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Same as the previous example, but use a regular expression for
searching instead (dict of regex -> dict):
In [116]: df.replace({"b": r"\s*\.\s*"}, {"b": np.nan}, regex=True)
Out[116]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can pass nested dictionaries of regular expressions that use regex=True:
In [117]: df.replace({"b": {"b": r""}}, regex=True)
Out[117]:
a b c
0 0 a a
1 1 b
2 2 . NaN
3 3 . d
Alternatively, you can pass the nested dictionary like so:
In [118]: df.replace(regex={"b": {r"\s*\.\s*": np.nan}})
Out[118]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can also use the group of a regular expression match when replacing (dict
of regex -> dict of regex), this works for lists as well.
In [119]: df.replace({"b": r"\s*(\.)\s*"}, {"b": r"\1ty"}, regex=True)
Out[119]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d
You can pass a list of regular expressions, of which those that match
will be replaced with a scalar (list of regex -> regex).
In [120]: df.replace([r"\s*\.\s*", r"a|b"], np.nan, regex=True)
Out[120]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
All of the regular expression examples can also be passed with the
to_replace argument as the regex argument. In this case the value
argument must be passed explicitly by name or regex must be a nested
dictionary. The previous example, in this case, would then be:
In [121]: df.replace(regex=[r"\s*\.\s*", r"a|b"], value=np.nan)
Out[121]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
This can be convenient if you do not want to pass regex=True every time you
want to use a regular expression.
Note
Anywhere in the above replace examples that you see a regular expression
a compiled regular expression is valid as well.
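A short sketch of passing a pre-compiled pattern; the frame simply mirrors the small d dictionary used above:
import re
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": list(range(4)), "b": list("ab.."), "c": ["a", "b", np.nan, "d"]})
pattern = re.compile(r"\s*\.\s*")       # same pattern as the raw-string examples
df.replace(pattern, np.nan)             # behaves like df.replace(r"\s*\.\s*", np.nan, regex=True)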
Numeric replacement#
replace() is similar to fillna().
In [122]: df = pd.DataFrame(np.random.randn(10, 2))
In [123]: df[np.random.rand(df.shape[0]) > 0.5] = 1.5
In [124]: df.replace(1.5, np.nan)
Out[124]:
0 1
0 -0.844214 -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
Replacing more than one value is possible by passing a list.
In [125]: df00 = df.iloc[0, 0]
In [126]: df.replace([1.5, df00], [np.nan, "a"])
Out[126]:
0 1
0 a -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
In [127]: df[1].dtype
Out[127]: dtype('float64')
You can also operate on the DataFrame in place:
In [128]: df.replace(1.5, np.nan, inplace=True)
Missing data casting rules and indexing#
While pandas supports storing arrays of integer and boolean type, these types
are not capable of storing missing data. Until we can switch to using a native
NA type in NumPy, we’ve established some “casting rules”. When a reindexing
operation introduces missing data, the Series will be cast according to the
rules introduced in the table below.
data type    Cast to
integer      float
boolean      object
float        no cast
object       no cast
For example:
In [129]: s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])
In [130]: s > 0
Out[130]:
0 True
2 True
4 True
6 True
7 True
dtype: bool
In [131]: (s > 0).dtype
Out[131]: dtype('bool')
In [132]: crit = (s > 0).reindex(list(range(8)))
In [133]: crit
Out[133]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
7 True
dtype: object
In [134]: crit.dtype
Out[134]: dtype('O')
Ordinarily NumPy will complain if you try to use an object array (even if it
contains boolean values) instead of a boolean array to get or set values from
an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:
In [135]: reindexed = s.reindex(list(range(8))).fillna(0)
In [136]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[136], line 1
----> 1 reindexed[crit]
File ~/work/pandas/pandas/pandas/core/series.py:1002, in Series.__getitem__(self, key)
999 if is_iterator(key):
1000 key = list(key)
-> 1002 if com.is_bool_indexer(key):
1003 key = check_bool_indexer(self.index, key)
1004 key = np.asarray(key, dtype=bool)
File ~/work/pandas/pandas/pandas/core/common.py:135, in is_bool_indexer(key)
131 na_msg = "Cannot mask with non-boolean array containing NA / NaN values"
132 if lib.infer_dtype(key_array) == "boolean" and isna(key_array).any():
133 # Don't raise on e.g. ["A", "B", np.nan], see
134 # test_loc_getitem_list_of_labels_categoricalindex_with_na
--> 135 raise ValueError(na_msg)
136 return False
137 return True
ValueError: Cannot mask with non-boolean array containing NA / NaN values
However, these can be filled in using fillna() and it will work fine:
In [137]: reindexed[crit.fillna(False)]
Out[137]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64
In [138]: reindexed[crit.fillna(True)]
Out[138]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
7 0.003659
dtype: float64
pandas provides a nullable integer dtype, but you must explicitly request it
when creating the series or column. Notice that we use a capital “I” in
the dtype="Int64".
In [139]: s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")
In [140]: s
Out[140]:
0 0
1 1
2 <NA>
3 3
4 4
dtype: Int64
See Nullable integer data type for more.
Experimental NA scalar to denote missing values#
Warning
Experimental: the behaviour of pd.NA can still change without warning.
New in version 1.0.0.
Starting from pandas 1.0, an experimental pd.NA value (singleton) is
available to represent scalar missing values. At this moment, it is used in
the nullable integer, boolean and
dedicated string data types as the missing value indicator.
The goal of pd.NA is provide a “missing” indicator that can be used
consistently across data types (instead of np.nan, None or pd.NaT
depending on the data type).
For example, when having missing values in a Series with the nullable integer
dtype, it will use pd.NA:
In [141]: s = pd.Series([1, 2, None], dtype="Int64")
In [142]: s
Out[142]:
0 1
1 2
2 <NA>
dtype: Int64
In [143]: s[2]
Out[143]: <NA>
In [144]: s[2] is pd.NA
Out[144]: True
Currently, pandas does not yet use those data types by default (when creating
a DataFrame or Series, or when reading in data), so you need to specify
the dtype explicitly. An easy way to convert to those dtypes is explained
here.
Propagation in arithmetic and comparison operations#
In general, missing values propagate in operations involving pd.NA. When
one of the operands is unknown, the outcome of the operation is also unknown.
For example, pd.NA propagates in arithmetic operations, similarly to
np.nan:
In [145]: pd.NA + 1
Out[145]: <NA>
In [146]: "a" * pd.NA
Out[146]: <NA>
There are a few special cases when the result is known, even when one of the
operands is NA.
In [147]: pd.NA ** 0
Out[147]: 1
In [148]: 1 ** pd.NA
Out[148]: 1
In equality and comparison operations, pd.NA also propagates. This deviates
from the behaviour of np.nan, where comparisons with np.nan always
return False.
In [149]: pd.NA == 1
Out[149]: <NA>
In [150]: pd.NA == pd.NA
Out[150]: <NA>
In [151]: pd.NA < 2.5
Out[151]: <NA>
To check if a value is equal to pd.NA, the isna() function can be
used:
In [152]: pd.isna(pd.NA)
Out[152]: True
An exception to this basic propagation rule is reductions (such as the
mean or the minimum), where pandas defaults to skipping missing values. See
above for more.
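A one-line sketch of that default, using the nullable integer dtype introduced earlier:
import pandas as pd

pd.Series([1, 2, None], dtype="Int64").mean()               # 1.5 -- the NA is skipped
pd.Series([1, 2, None], dtype="Int64").mean(skipna=False)   # <NA> -- propagates when skipna=False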
Logical operations#
For logical operations, pd.NA follows the rules of the
three-valued logic (or
Kleene logic, similarly to R, SQL and Julia). This logic means to only
propagate missing values when it is logically required.
For example, for the logical “or” operation (|), if one of the operands
is True, we already know the result will be True, regardless of the
other value (so regardless the missing value would be True or False).
In this case, pd.NA does not propagate:
In [153]: True | False
Out[153]: True
In [154]: True | pd.NA
Out[154]: True
In [155]: pd.NA | True
Out[155]: True
On the other hand, if one of the operands is False, the result depends
on the value of the other operand. Therefore, in this case pd.NA
propagates:
In [156]: False | True
Out[156]: True
In [157]: False | False
Out[157]: False
In [158]: False | pd.NA
Out[158]: <NA>
The behaviour of the logical “and” operation (&) can be derived using
similar logic (where now pd.NA will not propagate if one of the operands
is already False):
In [159]: False & True
Out[159]: False
In [160]: False & False
Out[160]: False
In [161]: False & pd.NA
Out[161]: False
In [162]: True & True
Out[162]: True
In [163]: True & False
Out[163]: False
In [164]: True & pd.NA
Out[164]: <NA>
NA in a boolean context#
Since the actual value of an NA is unknown, it is ambiguous to convert NA
to a boolean value. The following raises an error:
In [165]: bool(pd.NA)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[165], line 1
----> 1 bool(pd.NA)
File ~/work/pandas/pandas/pandas/_libs/missing.pyx:382, in pandas._libs.missing.NAType.__bool__()
TypeError: boolean value of NA is ambiguous
This also means that pd.NA cannot be used in a context where it is
evaluated to a boolean, such as if condition: ... where condition can
potentially be pd.NA. In such cases, isna() can be used to check
for pd.NA or condition being pd.NA can be avoided, for example by
filling missing values beforehand.
A similar situation occurs when using Series or DataFrame objects in if
statements, see Using if/truth statements with pandas.
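A small sketch of that pattern; the fallback branch is purely illustrative:
import pandas as pd

condition = pd.NA
# bool(condition) would raise TypeError, so test for missingness explicitly:
if pd.isna(condition):
    outcome = "treat as missing"         # hypothetical fallback
elif condition:
    outcome = "condition was True"
else:
    outcome = "condition was False"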
NumPy ufuncs#
pandas.NA implements NumPy’s __array_ufunc__ protocol. Most ufuncs
work with NA, and generally return NA:
In [166]: np.log(pd.NA)
Out[166]: <NA>
In [167]: np.add(pd.NA, 1)
Out[167]: <NA>
Warning
Currently, ufuncs involving an ndarray and NA will return an
object-dtype filled with NA values.
In [168]: a = np.array([1, 2, 3])
In [169]: np.greater(a, pd.NA)
Out[169]: array([<NA>, <NA>, <NA>], dtype=object)
The return type here may change to return a different array type
in the future.
See DataFrame interoperability with NumPy functions for more on ufuncs.
Conversion#
If you have a DataFrame or Series using traditional types that have missing data
represented using np.nan, there are convenience methods
convert_dtypes() in Series and convert_dtypes()
in DataFrame that can convert data to use the newer dtypes for integers, strings and
booleans listed here. This is especially helpful after reading
in data sets when letting the readers such as read_csv() and read_excel()
infer default dtypes.
In this example, while the dtypes of all columns are changed, we show the results for
the first 10 columns.
In [170]: bb = pd.read_csv("data/baseball.csv", index_col="id")
In [171]: bb[bb.columns[:10]].dtypes
Out[171]:
player object
year int64
stint int64
team object
lg object
g int64
ab int64
r int64
h int64
X2b int64
dtype: object
In [172]: bbn = bb.convert_dtypes()
In [173]: bbn[bbn.columns[:10]].dtypes
Out[173]:
player string
year Int64
stint Int64
team string
lg string
g Int64
ab Int64
r Int64
h Int64
X2b Int64
dtype: object
| 632
| 865
|
Fill panda columns with conditions
I'm trying to fill a column C with conditions: if the value of column B is None, then fill column C with the value of column A. If column B is not None, then fill column C with the value 3
I tried:
import pandas
df = pandas.DataFrame([{'A': 5, 'B': None, 'C': ''},
{'A': 2, 'B': "foo", 'C': ''},
{'A': 6, 'B': "foo", 'C': ''},
{'A': 1, 'B': None, 'C': ''}])
df["C"] = df["B"].apply(lambda x: 3 if (x != None) else df["A"])
My output:
TypeError: object of type 'int' has no len()
I know the problem is df["A"], but I don't know how to solve it
Good output:
df = pandas.DataFrame([{'A': 5, 'B': None, 'C': 5},
{'A': 2, 'B': "foo", 'C': 3},
{'A': 6, 'B': "foo", 'C': 3},
{'A': 1, 'B': None, 'C': 1}])
|
62,286,166
|
Merge DataFrame with many-to-many
|
<p>I have 2 DataFrames containing examples, I would like to see if a example of DataFrame 1 is present in DataFrame 2.</p>
<p>Normally I would aggregate the rows per example and simply merge the DataFrames. Unfortunately the merging has to be done with a "matching table" which has a many-to-many relationship between the keys (id_low vs. id_high).</p>
<p><strong>Simplified example</strong></p>
<p><em>Matching Table:</em></p>
<p><a href="https://i.stack.imgur.com/Vq13r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vq13r.png" alt="enter image description here"></a></p>
<p><em>Input DataFrames</em></p>
<p><a href="https://i.stack.imgur.com/EKthc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EKthc.png" alt="enter image description here"></a></p>
<p><em>They are therefore matchable like this:</em></p>
<p><a href="https://i.stack.imgur.com/5V8dk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5V8dk.png" alt="enter image description here"></a></p>
<p>Expected Output:</p>
<p><a href="https://i.stack.imgur.com/3j2Qu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3j2Qu.png" alt="enter image description here"></a></p>
<p><strong>Simplified example (for Python)</strong></p>
<pre><code>import pandas as pd
# Dataframe 1 - containing 1 Example
d1 = pd.DataFrame.from_dict({'Example': {0: 'Example 1', 1: 'Example 1', 2: 'Example 1'},
'id_low': {0: 1, 1: 2, 2: 3}})
# DataFrame 2 - containing 1 Example
d2 = pd.DataFrame.from_dict({'Example': {0: 'Example 2', 1: 'Example 2', 2: 'Example 2'},
'id_low': {0: 1, 1: 4, 2: 6}})
# DataFrame 3 - matching table
dm = pd.DataFrame.from_dict({'id_low': {0: 1, 1: 2, 2: 2, 3: 3, 4: 3, 5: 4, 6: 5, 7: 6, 8: 6},
'id_high': {0: 'A',
1: 'B',
2: 'C',
3: 'D',
4: 'E',
5: 'B',
6: 'B',
7: 'E',
8: 'F'}})
</code></pre>
<p>d1 and d2 are matchable as you can see above.</p>
<p><strong>Expected Output (or similar):</strong></p>
<pre><code>df_output = pd.DataFrame.from_dict({'Example': {0: 'Example 1'}, 'Example_2': {0: 'Example 2'}})
</code></pre>
<p><strong>Failed attempts</strong></p>
<p>Aggregation with matching-table-translated values, then merging. Considered using regex with the OR operator.</p>
| 62,287,953
| 2020-06-09T15:30:06.127000
| 1
| 1
| 1
| 2,337
|
python|pandas
|
<p>IIUC:</p>
<pre><code>d2.merge(dm)\
.merge(d1.merge(dm), on='id_high')\
.groupby(['Example_x','Example_y'])['id_high'].agg(list)\
.reset_index()
</code></pre>
<p>Output:</p>
<pre><code> Example_x Example_y id_high
0 Example 2 Example 1 [A, B, E]
</code></pre>
| 2020-06-09T16:59:14.733000
| 3
|
https://pandas.pydata.org/docs/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
IIUC:
d2.merge(dm)\
.merge(d1.merge(dm), on='id_high')\
.groupby(['Example_x','Example_y'])['id_high'].agg(list)\
.reset_index()
Output:
Example_x Example_y id_high
0 Example 2 Example 1 [A, B, E]
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() (and therefore
append()) makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
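A minimal sketch of that rule; the frames and index labels are made up for illustration:
import pandas as pd

a = pd.DataFrame({"x": [1]}, index=pd.Index(["r0"], name="id"))
b = pd.DataFrame({"x": [2]}, index=pd.Index(["r1"], name="id"))
c = pd.DataFrame({"x": [3]}, index=pd.Index(["r2"], name="other"))

pd.concat([a, b]).index.name   # 'id'  -- the shared name is kept
pd.concat([a, c]).index.name   # None  -- the names disagree, so the result is unnamed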
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists of letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
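A small sketch of the appropriately-indexed alternative; the row label "new" is just an example:
import pandas as pd

df1 = pd.DataFrame({"A": ["A0", "A1"], "B": ["B0", "B1"]}, index=[0, 1])
row = pd.DataFrame({"A": ["X0"], "B": ["X1"]}, index=["new"])   # hypothetical label
pd.concat([df1, row])   # the index [0, 1, 'new'] is preserved; no ignore_index needed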
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
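A rough sketch of merging with a named Series, assuming a pandas version (0.24 or later) where this is supported:
import pandas as pd

left = pd.DataFrame({"key": ["K0", "K1", "K2"], "lval": [1, 2, 3]})
right = pd.Series([10, 20], index=["K0", "K1"], name="rval")     # a named Series
pd.merge(left, right, left_on="key", right_index=True)           # rval becomes a regular column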
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
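As a hedged sketch of that shortcut, the two calls below are expected to produce the same index-on-index result:
import pandas as pd

left = pd.DataFrame({"A": [1, 2]}, index=["a", "b"])
right = pd.DataFrame({"B": [3, 4]}, index=["a", "b"])

via_merge = pd.merge(left, right, left_index=True, right_index=True, how="left")
via_join = left.join(right)          # join defaults to a left join on the indexes
assert via_merge.equals(via_join)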
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method (SQL join name): description
left (LEFT OUTER JOIN): use keys from left frame only
right (RIGHT OUTER JOIN): use keys from right frame only
outer (FULL OUTER JOIN): use union of keys from both frames
inner (INNER JOIN): use intersection of keys from both frames
cross (CROSS JOIN): create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
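The outputs of these calls are not reproduced here either; as one hedged illustration of the NA filling described above (assuming the multi-key left and right frames defined in In [36] and In [37]), the how="left" variant should look roughly like:
result_left = pd.merge(left, right, how="left", on=["key1", "key2"])
print(result_left)
#   key1 key2   A   B    C    D
# 0   K0   K0  A0  B0   C0   D0
# 1   K0   K1  A1  B1  NaN  NaN
# 2   K1   K0  A2  B2   C1   D1
# 3   K1   K0  A2  B2   C2   D2
# 4   K2   K1  A3  B3  NaN  NaN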
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
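Because B equals 2 in both rows of left and in all three rows of right, the result of the outer merge above should be the 2 x 3 = 6-row Cartesian product; a rough sketch of the missing output:
print(result)
#    A_x  B  A_y
# 0    1  2    4
# 1    1  2    5
# 2    1  2    6
# 3    2  2    4
# 4    2  2    5
# 5    2  2    6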
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation origin: _merge value
Merge key only in 'left' frame: left_only
Merge key only in 'right' frame: right_only
Merge key in both frames: both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
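A rough, hypothetical way to observe this on synthetic data; the key list, sizes, and repetition counts below are made up for illustration and timings will vary by machine:
import timeit
import numpy as np
import pandas as pd

categories = [f"k{i}" for i in range(1000)]
n = 200_000
keys = np.random.choice(categories, size=n)

left_obj = pd.DataFrame({"key": keys, "v": np.arange(n)})
right_obj = pd.DataFrame({"key": categories, "w": np.arange(len(categories))})

# identical CategoricalDtype on both sides, as the note above requires
cat_type = pd.CategoricalDtype(categories)
left_cat = left_obj.assign(key=left_obj["key"].astype(cat_type))
right_cat = right_obj.assign(key=right_obj["key"].astype(cat_type))

print(timeit.timeit(lambda: left_obj.merge(right_obj, on="key"), number=10))  # object keys
print(timeit.timeit(lambda: left_cat.merge(right_cat, on="key"), number=10))  # category keys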
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
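Since the docs describe the two spellings as the same behavior, a quick sanity check (using the indexed left and right frames defined in In [79] and In [80]) could be:
joined = left.join(right, how="outer")
merged = pd.merge(left, right, left_index=True, right_index=True, how="outer")
print(joined.equals(merged))  # expected: True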
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
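Again, the two calls above are documented as completely equivalent, so a small check with the frames from In [86] and In [87] might look like:
by_join = left.join(right, on="key")
by_merge = pd.merge(left, right, left_on="key", right_index=True, how="left", sort=False)
print(by_join.equals(by_merge))  # expected: True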
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
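Concretely (the rendered output is not included in this text), only the three key pairs present in both frames should survive, roughly:
print(result)
#     A   B key1 key2   C   D
# 0  A0  B0   K0   K0  C0  D0
# 2  A2  B2   K1   K0  C1  D1
# 3  A3  B3   K2   K1  C3  D3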
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the following alternative.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
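A sketch of the expected result (not reproduced here): the shared key1 level stays on the index, as the note below explains, giving roughly:
print(result)
#        A   B key2   C   D
# key1
# K0    A0  B0   K0  C0  D0
# K1    A2  B2   K0  C1  D1
# K2    A3  B3   K1  C3  D3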
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
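For reference (the rendered tables are not included here), the default suffixes produce v_x / v_y and the explicit ones v_l / v_r; a rough sketch using the k-keyed frames above:
print(pd.merge(left, right, on="k"))
#     k  v_x  v_y
# 0  K0    1    4
# 1  K0    1    5
print(pd.merge(left, right, on="k", suffixes=("_l", "_r")))
#     k  v_l  v_r
# 0  K0    1    4
# 1  K0    1    5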
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
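To make the contrast concrete (assuming df1 and df2 as defined in In [126] and In [127]), combine_first only fills holes in df1, while update overwrites df1 in place wherever df2 has a non-NA value; roughly:
print(result)  # df1.combine_first(df2)
#      0    1    2
# 0  NaN  3.0  5.0
# 1 -4.6  NaN -8.2
# 2 -5.0  7.0  4.0
print(df1)     # after df1.update(df2)
#       0    1    2
# 0   NaN  3.0  5.0
# 1 -42.6  NaN -8.2
# 2  -5.0  1.6  4.0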
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 153
| 366
|
Merge DataFrame with many-to-many
I have 2 DataFrames containing examples, and I would like to see if an example of DataFrame 1 is present in DataFrame 2.
Normally I would aggregate the rows per example and simply merge the DataFrames. Unfortunately the merging has to be done with a "matching table" which has a many-to-many relationship between the keys (id_low vs. id_high).
Simplified example
Matching Table:
Input DataFrames
They are therefore matchable like this:
Expected Output:
Simplified example (for Python)
import pandas as pd
# Dataframe 1 - containing 1 Example
d1 = pd.DataFrame.from_dict({'Example': {0: 'Example 1', 1: 'Example 1', 2: 'Example 1'},
'id_low': {0: 1, 1: 2, 2: 3}})
# DataFrame 2 - containing 1 Example
d2 = pd.DataFrame.from_dict({'Example': {0: 'Example 2', 1: 'Example 2', 2: 'Example 2'},
'id_low': {0: 1, 1: 4, 2: 6}})
# DataFrame 3 - matching table
dm = pd.DataFrame.from_dict({'id_low': {0: 1, 1: 2, 2: 2, 3: 3, 4: 3, 5: 4, 6: 5, 7: 6, 8: 6},
'id_high': {0: 'A',
1: 'B',
2: 'C',
3: 'D',
4: 'E',
5: 'B',
6: 'B',
7: 'E',
8: 'F'}})
d1 and d2 are matchable as you can see above.
Expected Output (or similar):
df_output = pd.DataFrame.from_dict({'Example': {0: 'Example 1'}, 'Example_2': {0: 'Example 2'}})
Failed attempts
Aggregating the values translated via the matching table and then merging. Also considered using regex with the OR operator.
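Not part of the original post: one possible sketch, using only the d1 / d2 / dm frames above, translates each frame's id_low to id_high through the matching table and then merges on id_high. Whether "shares at least one id_high" is the correct matching rule depends on the exact requirement, so treat this as an illustration only.
import pandas as pd

# relies on d1, d2 and dm as defined in the snippet above
t1 = d1.merge(dm, on="id_low")  # Example 1 rows with their id_high values
t2 = d2.merge(dm, on="id_low")  # Example 2 rows with their id_high values

pairs = (
    t1.merge(t2, on="id_high", suffixes=("", "_2"))[["Example", "Example_2"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
print(pairs)
#      Example  Example_2
# 0  Example 1  Example 2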
|
62,973,678
|
How to get first non-zero entry across all columns and sum it row-by-row for a data-frame?
|
<hr />
<p>Say I have a table like this-</p>
<pre><code> User1 User2 User3 User4 User5 User6 User7 User8
w1 1 1 0 0 0 1 0 0
w2 0 1 0 0 1 1 0 0
w3 0 0 1 0 1 1 0 0
w4 1 1 1 1 0 0 0 0
w5 1 0 1 0 1 1 0 1
w6 1 1 1 1 1 1 1 1
</code></pre>
<p>I want an output table that finds the first 1 for each column and sums it for every week, so something like this-</p>
<p>(I basically want to find the number of first time users, by week)</p>
<pre><code> Column
w1 3 -ie User1, User2, User6
w2 1 -ie User5
w3 1 -ie User3
w4 1 -ie User4
w5 1 -ie User8
w6 1 -ie User7
</code></pre>
| 62,973,711
| 2020-07-18T20:51:58.420000
| 1
| null | 1
| 36
|
python|pandas
|
<p>Try <code>idxmax</code> and <code>value_counts</code></p>
<pre><code>s=df.idxmax().value_counts()
s
w1 3
w2 1
w6 1
w3 1
w4 1
w5 1
dtype: int64
</code></pre>
| 2020-07-18T20:55:48.960000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.any.html
|
pandas.DataFrame.any#
pandas.DataFrame.any#
DataFrame.any(*, axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]#
Return whether any element is True, potentially over an axis.
Returns False unless there is at least one element within a series or
along a Dataframe axis that is True or equivalent (e.g. non-zero or
non-empty).
Parameters
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Indicate which axis or axes should be reduced. For Series this parameter
is unused and defaults to 0.
0 / ‘index’ : reduce the index, return a Series whose index is the
original column labels.
1 / ‘columns’ : reduce the columns, return a Series whose index is the
original index.
None : reduce all axes, return a scalar.
bool_onlybool, default NoneInclude only boolean columns. If None, will attempt to use everything,
then use only boolean data. Not implemented for Series.
skipnabool, default TrueExclude NA/null values. If the entire row/column is NA and skipna is
True, then the result will be False, as for an empty row/column.
If skipna is False, then NA are treated as True, because these are not
equal to zero.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
Try idxmax and value_counts
s=df.idxmax().value_counts()
s
w1 3
w2 1
w6 1
w3 1
w4 1
w5 1
dtype: int64
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
**kwargsany, default NoneAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameIf level is specified, then, DataFrame is returned; otherwise, Series
is returned.
See also
numpy.anyNumpy version of this method.
Series.anyReturn whether any element is True.
Series.allReturn whether all elements are True.
DataFrame.anyReturn whether any element is True over requested axis.
DataFrame.allReturn whether all elements are True over requested axis.
Examples
Series
For Series input, the output is a scalar indicating whether any element
is True.
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([], dtype="float64").any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
>>> df
A B
0 True 1
1 False 2
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df
A B
0 True 1
1 False 0
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
Aggregating over the entire DataFrame with axis=None.
>>> df.any(axis=None)
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
| 1,227
| 1,347
|
How to get first non-zero entry across all columns and sum it row-by-row for a data-frame?
Say I have a table like this-
User1 User2 User3 User4 User5 User6 User7 User8
w1 1 1 0 0 0 1 0 0
w2 0 1 0 0 1 1 0 0
w3 0 0 1 0 1 1 0 0
w4 1 1 1 1 0 0 0 0
w5 1 0 1 0 1 1 0 1
w6 1 1 1 1 1 1 1 1
I want an output table that finds the first 1 for each column and sums it for every week, so something like this-
(I basically want to find the number of first time users, by week)
Column
w1 3 -ie User1, User2, User6
w2 1 -ie User5
w3 1 -ie User3
w4 1 -ie User4
w5 1 -ie User8
w6 1 -ie User7
|
63,028,890
|
remove 0 and add ' ' in every number in pandas
|
<pre><code>0813466, 0813472, 0813473, 0813474, 0813475, 0813476, 0813477, 0813478, 0813479,
</code></pre>
<p>I want to remove all zero values in front of my number and add quotation marks to each number, so that my data will become</p>
<pre><code>'813466', '813472', '813473', '813474', '813475', '813476', '813477', '813478', '813479'
</code></pre>
| 63,028,906
| 2020-07-22T07:21:22.170000
| 1
| null | 1
| 37
|
python|pandas
|
<p>Remove the leading <code>0</code> from the left side with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.lstrip.html" rel="nofollow noreferrer"><code>Series.str.lstrip</code></a>, thank you @yatu for mentioning it:</p>
<blockquote>
<p>Those have to be strings though, otherwise they wouldn't have those 0s</p>
</blockquote>
<pre><code>df['col'].str.lstrip('0')
</code></pre>
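<p>The question also asks for quotation marks around each number, which the answer above does not cover. A small, hedged extension (assuming a string column named <code>col</code>, as in the answer) could be:</p>
<pre><code>stripped = df['col'].str.lstrip('0')
quoted = "'" + stripped + "'"  # wrap each value in single quotes
print(quoted.tolist())
</code></pre>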
| 2020-07-22T07:22:32.620000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html
|
pandas.DataFrame.reset_index#
pandas.DataFrame.reset_index#
DataFrame.reset_index(level=None, *, drop=False, inplace=False, col_level=0, col_fill='', allow_duplicates=_NoDefault.no_default, names=None)[source]#
Remove the leading 0 from the left side with Series.str.lstrip, thank you @yatu for mentioning it:
Those have to be strings though, otherwise they wouldn't have those 0s
df['col'].str.lstrip('0')
Reset the index, or a level of it.
Reset the index of the DataFrame, and use the default one instead.
If the DataFrame has a MultiIndex, this method can remove one or more
levels.
Parameters
levelint, str, tuple, or list, default NoneOnly remove the given levels from the index. Removes all levels by
default.
dropbool, default FalseDo not try to insert index into dataframe columns. This resets
the index to the default integer index.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
col_levelint or str, default 0If the columns have multiple levels, determines which level the
labels are inserted into. By default it is inserted into the first
level.
col_fillobject, default ‘’If the columns have multiple levels, determines how the other
levels are named. If None then the index name is repeated.
allow_duplicatesbool, optional, default lib.no_defaultAllow duplicate column labels to be created.
New in version 1.5.0.
namesint, str or 1-dimensional list, default NoneUsing the given string, rename the DataFrame column which contains the
index data. If the DataFrame has a MultiIndex, this has to be a list or
tuple with length equal to the number of levels.
New in version 1.5.0.
Returns
DataFrame or NoneDataFrame with the new index or None if inplace=True.
See also
DataFrame.set_indexOpposite of reset_index.
DataFrame.reindexChange to new indices or expand indices.
DataFrame.reindex_likeChange to same indices as other DataFrame.
Examples
>>> df = pd.DataFrame([('bird', 389.0),
... ('bird', 24.0),
... ('mammal', 80.5),
... ('mammal', np.nan)],
... index=['falcon', 'parrot', 'lion', 'monkey'],
... columns=('class', 'max_speed'))
>>> df
class max_speed
falcon bird 389.0
parrot bird 24.0
lion mammal 80.5
monkey mammal NaN
When we reset the index, the old index is added as a column, and a
new sequential index is used:
>>> df.reset_index()
index class max_speed
0 falcon bird 389.0
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
We can use the drop parameter to avoid the old index being added as
a column:
>>> df.reset_index(drop=True)
class max_speed
0 bird 389.0
1 bird 24.0
2 mammal 80.5
3 mammal NaN
You can also use reset_index with MultiIndex.
>>> index = pd.MultiIndex.from_tuples([('bird', 'falcon'),
... ('bird', 'parrot'),
... ('mammal', 'lion'),
... ('mammal', 'monkey')],
... names=['class', 'name'])
>>> columns = pd.MultiIndex.from_tuples([('speed', 'max'),
... ('species', 'type')])
>>> df = pd.DataFrame([(389.0, 'fly'),
... ( 24.0, 'fly'),
... ( 80.5, 'run'),
... (np.nan, 'jump')],
... index=index,
... columns=columns)
>>> df
speed species
max type
class name
bird falcon 389.0 fly
parrot 24.0 fly
mammal lion 80.5 run
monkey NaN jump
Using the names parameter, choose a name for the index column:
>>> df.reset_index(names=['classes', 'names'])
classes names speed species
max type
0 bird falcon 389.0 fly
1 bird parrot 24.0 fly
2 mammal lion 80.5 run
3 mammal monkey NaN jump
If the index has multiple levels, we can reset a subset of them:
>>> df.reset_index(level='class')
class speed species
max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
If we are not dropping the index, by default, it is placed in the top
level. We can place it in another level:
>>> df.reset_index(level='class', col_level=1)
speed species
class max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
When the index is inserted under another level, we can specify under
which one with the parameter col_fill:
>>> df.reset_index(level='class', col_level=1, col_fill='species')
species speed species
class max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
If we specify a nonexistent level for col_fill, it is created:
>>> df.reset_index(level='class', col_level=1, col_fill='genus')
genus speed species
class max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
| 215
| 400
|
remove 0 and add ' ' in every number in pandas
0813466, 0813472, 0813473, 0813474, 0813475, 0813476, 0813477, 0813478, 0813479,
I want to remove all zero values in front of my number and add quotation marks to each number, so that my data will become
'813466', '813472', '813473', '813474', '813475', '813476', '813477', '813478', '813479'
|
64,317,427
|
check the difference in dates pandas and only keep certain IDs
|
<p>I'm trying to find IDs which have no dates below a certain timestamp. In other words, I am trying to find dates which are above a certain timestamp.</p>
<p>The below code works, but is there a better way to do the same procedure?</p>
<pre><code>#pd.set_option('display.max_rows', 1000)
import pandas as pd
from datetime import date, timedelta
last_y_days = pd.datetime.today() - timedelta(days=60)
tmp_df = df[['ID','TIMESTAMP']].drop_duplicates()
tmp_df['result'] = tmp_df['TIMESTAMP'] < last_y_days
foobar = tmp_df.groupby('ID')['result'].unique().reset_index()
foobar[foobar['result'].apply(lambda x: True not in x)]
</code></pre>
<p>If we assume this to be the data, I want those IDs which have no timestamps before the last 60 days. In this case, the only answer is <code>1</code></p>
<pre><code> ID TIMESTAMP
1 1 2020-08-26
3 2 2020-04-18
4 2 2020-03-31
7 2 2020-01-10
10 2 2020-05-13
14 2 2020-02-24
16 2 2020-02-20
19 2 2020-08-03
34 3 2020-09-29
54 3 2020-08-14
55 3 2020-10-01
70 4 2020-01-25
72 4 2020-04-22
73 4 2020-09-01
75 4 2020-03-03
76 4 2020-07-21
79 4 2020-04-20
81 4 2020-04-28
83 4 2020-08-22
85 4 2020-06-03
</code></pre>
| 64,317,741
| 2020-10-12T11:57:18.813000
| 1
| null | 1
| 38
|
python|pandas
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.setdiff1d.html" rel="nofollow noreferrer"><code>numpy.setdiff1d</code></a> with filtered <code>ID</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>df['TIMESTAMP'] = pd.to_datetime(df['TIMESTAMP'])
from datetime import date, timedelta
last_y_days = pd.datetime.today() - timedelta(days=60)
print (last_y_days)
ids = np.setdiff1d(df['ID'], df.loc[df['TIMESTAMP'] < last_y_days, 'ID'].unique()).tolist()
print (ids)
[1, 3]
</code></pre>
<p>Or test if at least one <code>True</code> per groups by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.any.html" rel="nofollow noreferrer"><code>GroupBy.any</code></a> for mask and then filter not matched index values:</p>
<pre><code>m = (df['TIMESTAMP'] < last_y_days).groupby(df['ID']).any()
ids = m.index[~m].tolist()
print (ids)
[1, 3]
</code></pre>
| 2020-10-12T12:17:43.380000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.compare.html
|
pandas.DataFrame.compare#
pandas.DataFrame.compare#
DataFrame.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another DataFrame and show the differences.
New in version 1.1.0.
Parameters
otherDataFrameObject to compare with.
align_axis{0 or ‘index’, 1 or ‘columns’}, default 1Determine which axis to align the comparison on.
0, or ‘index’Resulting differences are stacked verticallywith rows drawn alternately from self and other.
1, or ‘columns’Resulting differences are aligned horizontallywith columns drawn alternately from self and other.
Use numpy.setdiff1d with filtered ID in DataFrame.loc:
df['TIMESTAMP'] = pd.to_datetime(df['TIMESTAMP'])
from datetime import date, timedelta
last_y_days = pd.datetime.today() - timedelta(days=60)
print (last_y_days)
ids = np.setdiff1d(df['ID'], df.loc[df['TIMESTAMP'] < last_y_days, 'ID'].unique()).tolist()
print (ids)
[1, 3]
Or test if at least one True per groups by GroupBy.any for mask and then filter not matched index values:
m = (df['TIMESTAMP'] < last_y_days).groupby(df['ID']).any()
ids = m.index[~m].tolist()
print (ids)
[1, 3]
keep_shapebool, default FalseIf true, all rows and columns are kept.
Otherwise, only the ones with different values are kept.
keep_equalbool, default FalseIf true, the result keeps values that are equal.
Otherwise, equal values are shown as NaNs.
result_namestuple, default (‘self’, ‘other’)Set the dataframes names in the comparison.
New in version 1.5.0.
Returns
DataFrameDataFrame that shows the differences stacked side by side.
The resulting index will be a MultiIndex with ‘self’ and ‘other’
stacked alternately at the inner level.
Raises
ValueErrorWhen the two DataFrames don’t have identical labels or shape.
See also
Series.compareCompare with another Series and show differences.
DataFrame.equalsTest whether two objects contain the same elements.
Notes
Matching NaNs will not appear as a difference.
Can only compare identically-labeled
(i.e. same shape, identical row and column labels) DataFrames
Examples
>>> df = pd.DataFrame(
... {
... "col1": ["a", "a", "b", "b", "a"],
... "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
... "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
... },
... columns=["col1", "col2", "col3"],
... )
>>> df
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
>>> df2 = df.copy()
>>> df2.loc[0, 'col1'] = 'c'
>>> df2.loc[2, 'col3'] = 4.0
>>> df2
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
Align the differences on columns
>>> df.compare(df2)
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Assign result_names
>>> df.compare(df2, result_names=("left", "right"))
col1 col3
left right left right
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Stack the differences on rows
>>> df.compare(df2, align_axis=0)
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
Keep the equal values
>>> df.compare(df2, keep_equal=True)
col1 col3
self other self other
0 a c 1.0 1.0
2 b b 3.0 4.0
Keep all original rows and columns
>>> df.compare(df2, keep_shape=True)
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
Keep all original rows and columns and also all original values
>>> df.compare(df2, keep_shape=True, keep_equal=True)
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 630
| 1,179
|
check the difference in dates pandas and only keep certain IDs
I'm trying to find IDs which have no dates below a certain timestamp. In other words, I am trying to find dates which are above a certain timestamp.
The below code works, but is there a better way to do the same procedure?
#pd.set_option('display.max_rows', 1000)
import pandas as pd
from datetime import date, timedelta
last_y_days = pd.datetime.today() - timedelta(days=60)
tmp_df = df[['ID','TIMESTAMP']].drop_duplicates()
tmp_df['result'] = tmp_df['TIMESTAMP'] < last_y_days
foobar = tmp_df.groupby('ID')['result'].unique().reset_index()
foobar[foobar['result'].apply(lambda x: True not in x)]
If we assume this to be the data, I want those IDs which have no timestamps before the last 60 days. In this case, the only answer is 1
ID TIMESTAMP
1 1 2020-08-26
3 2 2020-04-18
4 2 2020-03-31
7 2 2020-01-10
10 2 2020-05-13
14 2 2020-02-24
16 2 2020-02-20
19 2 2020-08-03
34 3 2020-09-29
54 3 2020-08-14
55 3 2020-10-01
70 4 2020-01-25
72 4 2020-04-22
73 4 2020-09-01
75 4 2020-03-03
76 4 2020-07-21
79 4 2020-04-20
81 4 2020-04-28
83 4 2020-08-22
85 4 2020-06-03
|
70,246,393
|
Skipping variable number of C-style comment lines when using pandas read_table
|
<p>The pandas <code>read_table()</code> function enables us to read a <code>*.tab</code> file, and the parameter <code>skiprows</code> provides flexible ways to retrieve the data. However, I run into trouble when I need to read a <code>*.tab</code> file in a loop and the number of rows to skip is not fixed. For example, the content to skip starts with <code>/*</code> and ends with <code>*/</code>, such as:</p>
<pre><code>/*
...
The number of rows need to skip is random
...
*/
</code></pre>
<p>So how do I find the line containing <code>*/</code> and then use the parameter <code>skiprows</code>?</p>
| 70,246,453
| 2021-12-06T13:37:44.823000
| 1
| null | 1
| 42
|
python|pandas
|
<p>Consume rows until the current row starts with <code>'*/'</code>:</p>
<pre><code>with open('data.txt') as fp:
for row in fp:
if row.startswith('*/'):
df = pd.read_table(fp)
</code></pre>
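<p>A slightly fuller sketch of the same idea (not from the original answer; the file name <code>data.tab</code> is hypothetical), with the import and an explicit <code>break</code>:</p>
<pre><code>import pandas as pd

with open('data.tab') as fp:
    # consume the C-style comment block: everything up to and including the '*/' line
    for row in fp:
        if row.strip().startswith('*/'):
            break
    # fp now points at the first line after the comment block
    df = pd.read_table(fp)
</code></pre>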
| 2021-12-06T13:43:48.290000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.read_table.html
|
pandas.read_table#
Consume rows until the current row starts with '*/':
with open('data.txt') as fp:
for row in fp:
if row.startswith('*/'):
df = pd.read_table(fp)
pandas.read_table#
pandas.read_table(filepath_or_buffer, *, sep=_NoDefault.no_default, delimiter=None, header='infer', names=_NoDefault.no_default, index_col=None, usecols=None, squeeze=None, prefix=_NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options=None)[source]#
Read general delimited file into DataFrame.
Also supports optionally iterating or breaking of the file
into chunks.
Additional help can be found in the online docs for
IO Tools.
Parameters
filepath_or_bufferstr, path object or file-like objectAny valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method, such as
a file handle (e.g. via builtin open function) or StringIO.
sepstr, default ‘\t’ (tab-stop)Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will
be used and automatically detect the separator by Python’s builtin sniffer
tool, csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
delimiterstr, default NoneAlias for sep.
headerint, list of int, None, default ‘infer’Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names
are passed the behavior is identical to header=0 and column
names are inferred from the first line of the file, if column
names are passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to
replace existing names. The header can be a list of integers that
specify row locations for a multi-index on the columns
e.g. [0,1,3]. Intervening rows that are not specified will be
skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if
skip_blank_lines=True, so header=0 denotes the first line of
data rather than the first line of the file.
namesarray-like, optionalList of column names to use. If the file contains a header row,
then you should explicitly pass header=0 to override the column names.
Duplicates in this list are not allowed.
index_colint, str, sequence of int / str, or False, optional, default NoneColumn(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note: index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
usecolslist-like or callable, optionalReturn a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
To instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]
for ['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column
names, returning names where the callable function evaluates to True. An
example of a valid callable argument would be lambda x: x.upper() in
['AAA', 'BBB', 'DDD']. Using this parameter results in much faster
parsing time and lower memory usage.
squeezebool, default FalseIf the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_table to squeeze
the data.
prefixstr, optionalPrefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
mangle_dupe_colsbool, default TrueDuplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than
‘X’…’X’. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
Deprecated since version 1.5.0: Not implemented, and a new argument to specify the pattern for the
names of duplicated columns will be added instead
dtypeType name or dict of column -> type, optionalData type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32,
‘c’: ‘Int64’}
Use str or object together with suitable na_values settings
to preserve and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
engine{‘c’, ‘python’, ‘pyarrow’}, optionalParser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
convertersdict, optionalDict of functions for converting values in certain columns. Keys can either
be integers or column labels.
true_valueslist, optionalValues to consider as True.
false_valueslist, optionalValues to consider as False.
skipinitialspacebool, default FalseSkip spaces after delimiter.
skiprowslist-like, int or callable, optionalLine numbers to skip (0-indexed) or number of lines to skip (int)
at the start of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise.
An example of a valid callable argument would be lambda x: x in [0, 2].
skipfooterint, default 0Number of lines at bottom of file to skip (Unsupported with engine=’c’).
nrowsint, optionalNumber of rows of file to read. Useful for reading pieces of large files.
na_valuesscalar, str, list-like, or dict, optionalAdditional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted as
NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’.
keep_default_nabool, default TrueWhether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filterbool, default TrueDetect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbosebool, default FalseIndicate number of NA values placed in non-numeric columns.
skip_blank_linesbool, default TrueIf True, skip over blank lines rather than interpreting as NaN values.
parse_datesbool or list of int or names or list of lists or dict, default FalseThe behavior is as follows:
boolean. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
If a column or index cannot be represented as an array of datetimes,
say because of an unparsable value or a mixture of timezones, the column
or index will be returned unaltered as an object data type. For
non-standard datetime parsing, use pd.to_datetime after
pd.read_csv. To parse an index or column with a mixture of timezones,
specify date_parser to be a partially-applied
pandas.to_datetime() with utc=True. See
Parsing a CSV with mixed timezones for more.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_formatbool, default FalseIf True and parse_dates is enabled, pandas will attempt to infer the
format of the datetime strings in the columns, and if it can be inferred,
switch to a faster method of parsing them. In some cases this can increase
the parsing speed by 5-10x.
keep_date_colbool, default FalseIf True and parse_dates specifies combining multiple columns then
keep the original columns.
date_parserfunction, optionalFunction to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by parse_dates into a single array
and pass that; and 3) call date_parser once for each row using one or
more strings (corresponding to the columns defined by parse_dates) as
arguments.
dayfirstbool, default FalseDD/MM format dates, international and European format.
cache_datesbool, default TrueIf True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
iteratorbool, default FalseReturn TextFileReader object for iteration or getting chunks with
get_chunk().
Changed in version 1.2: TextFileReader is a context manager.
chunksizeint, optionalReturn TextFileReader object for iteration.
See the IO Tools docs
for more information on iterator and chunksize.
Changed in version 1.2: TextFileReader is a context manager.
compressionstr or dict, default ‘infer’For on-the-fly decompression of on-disk data. If ‘infer’ and ‘filepath_or_buffer’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
thousandsstr, optionalThousands separator.
decimalstr, default ‘.’Character to recognize as decimal point (e.g. use ‘,’ for European data).
lineterminatorstr (length 1), optionalCharacter to break file into lines. Only valid with C parser.
quotecharstr (length 1), optionalThe character used to denote the start and end of a quoted item. Quoted
items can include the delimiter and it will be ignored.
quotingint or csv.QUOTE_* instance, default 0Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequotebool, default TrueWhen quotechar is specified and quoting is not QUOTE_NONE, indicate
whether or not to interpret two consecutive quotechar elements INSIDE a
field as a single quotechar element.
escapecharstr (length 1), optionalOne-character string used to escape other characters.
commentstr, optionalIndicates remainder of line should not be parsed. If found at the beginning
of a line, the line will be ignored altogether. This parameter must be a
single character. Like empty lines (as long as skip_blank_lines=True),
fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing
#empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being
treated as the header.
encodingstr, optionalEncoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python
standard encodings .
Changed in version 1.2: When encoding is None, errors="replace" is passed to
open(). Otherwise, errors="strict" is passed to open().
This behavior was previously only the case for engine="python".
Changed in version 1.3.0: encoding_errors is a new argument. encoding has no longer an
influence on how encoding errors are handled.
encoding_errorsstr, optional, default “strict”How encoding errors are treated. List of possible values .
New in version 1.3.0.
dialectstr or csv.Dialect, optionalIf provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
error_bad_linesbool, optional, default NoneLines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned.
If False, then these “bad lines” will be dropped from the DataFrame that is
returned.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
warn_bad_linesbool, optional, default NoneIf error_bad_lines is False, and warn_bad_lines is True, a warning for each
“bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
on_bad_lines{‘error’, ‘warn’, ‘skip’} or callable, default ‘error’Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are :
‘error’, raise an Exception when a bad line is encountered.
‘warn’, raise a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
New in version 1.4.0:
callable, function with signature
(bad_line: list[str]) -> list[str] | None that will process a single
bad line. bad_line is a list of strings split by the sep.
If the function returns None, the bad line will be ignored.
If the function returns a new list of strings with more elements than
expected, a ParserWarning will be emitted while dropping extra elements.
Only supported when engine="python"
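A minimal sketch of the callable form, assuming pandas >= 1.4 and a hypothetical "messy.csv" whose well-formed rows have three fields:
import pandas as pd
def keep_first_three(bad_line):
    # bad_line is the offending row split by sep; trim it to the expected
    # width, or return None to drop the row silently.
    return bad_line[:3]
# The callable form is only supported with the Python engine.
df = pd.read_csv("messy.csv", engine="python", on_bad_lines=keep_first_three)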
delim_whitespacebool, default FalseSpecifies whether or not whitespace (e.g. ' ' or ' ') will be
used as the sep. Equivalent to setting sep='\s+'. If this option
is set to True, nothing should be passed in for the delimiter
parameter.
low_memorybool, default TrueInternally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser).
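A short sketch of chunked reading to keep memory bounded (file and column names are placeholders); since pandas 1.2 the returned TextFileReader can be used as a context manager:
import pandas as pd
total = 0
with pd.read_csv("big.csv", chunksize=100_000) as reader:
    for chunk in reader:
        # Each chunk is a regular DataFrame with up to 100,000 rows.
        total += chunk["value"].sum()
print(total)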
memory_mapbool, default FalseIf a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
float_precisionstr, optionalSpecifies which converter the C engine should use for floating-point
values. The options are None or ‘high’ for the ordinary converter,
‘legacy’ for the original lower precision pandas converter, and
‘round_trip’ for the round-trip converter.
Changed in version 1.2.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.
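A hedged sketch of storage_options for an HTTP(S) source, where the key-value pairs become request headers (the URL and header value are placeholders):
import pandas as pd
# The dict entries are forwarded to urllib.request.Request as headers.
df = pd.read_csv(
    "https://example.com/data.csv",
    storage_options={"User-Agent": "my-app/0.1"},
)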
Returns
DataFrame or TextParserA comma-separated values (csv) file is returned as two-dimensional
data structure with labeled axes.
See also
DataFrame.to_csvWrite DataFrame to a comma-separated values (csv) file.
read_csvRead a comma-separated values (csv) file into DataFrame.
read_fwfRead a table of fixed-width formatted lines into DataFrame.
Examples
>>> pd.read_table('data.csv')
| 20
| 189
|
Skipping variable number of C-style comment lines when using pandas read_table
The pandas read_table() function lets us read a *.tab file, and the skiprows parameter provides a flexible way to retrieve the data. However, I run into trouble when I need to read *.tab files in a loop and the number of rows to skip varies. For example, the content to skip starts with /* and ends with */, such as:
/*
...
The number of rows to skip is random
...
*/
So how do I find the line containing */ and then use the skiprows parameter?
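One possible approach (a hedged sketch, not taken from the original post): pre-scan the file for the line that closes the comment block, then pass the computed offset to skiprows.
import pandas as pd
path = "data.tab"  # hypothetical file name
skip = 0
with open(path) as fh:
    for i, line in enumerate(fh):
        if line.strip().endswith("*/"):
            skip = i + 1  # skip everything up to and including the closing */
            break
df = pd.read_table(path, skiprows=skip)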
|
63,341,839
|
Pandas - Replacing text in column with dummy values
|
<p>I have a Dataframe that has some customer info as shown below:</p>
<pre><code>Customer Name, Purchase Date
Kevin, 2020-01-10
Scott, 2020-02-01
Mark, 2020-04-01
Peter, 2020-06-12
</code></pre>
<p>I would like to replace the Customer Name column with dummy values such as "<code>Customer 1</code>", "<code>Customer 2</code>" and so on. Expected output:</p>
<pre><code>Customer Name, Purchase Date
Customer 1, 2020-01-10
Customer 2, 2020-02-01
Customer 3, 2020-04-01
Customer 4, 2020-06-12
</code></pre>
<p>I would like this to be based on the DataFrame shape</p>
| 63,341,872
| 2020-08-10T14:07:42.290000
| 3
| null | 1
| 48
|
pandas
|
<p>If all values are unique use <code>index</code> values converted to strings:</p>
<pre><code>df['Customer Name'] = 'Customer ' + (df.index + 1).astype(str)
print (df)
Customer Name Purchase Date
0 Customer 1 2020-01-10
1 Customer 2 2020-02-01
2 Customer 3 2020-04-01
3 Customer 4 2020-06-12
</code></pre>
<p>If you need to number the unique values of column <code>Customer Name</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.factorize.html" rel="nofollow noreferrer"><code>factorize</code></a>:</p>
<pre><code>s = pd.Series((pd.factorize(df['Customer Name'])[0] + 1), index=df.index).astype(str)
df['Customer Name'] = 'Customer ' + s
print (df)
Customer Name Purchase Date
0 Customer 1 2020-01-10
1 Customer 2 2020-02-01
2 Customer 3 2020-04-01
3 Customer 4 2020-06-12
</code></pre>
<p>The difference becomes visible with duplicated values:</p>
<pre><code>print (df)
Customer Name Purchase Date
0 Mark 2020-01-10
1 Scott 2020-02-01
2 Mark 2020-04-01
3 Peter 2020-06-12
df['Customer Name1'] = 'Customer ' + (df.index + 1).astype(str)
s = pd.Series((pd.factorize(df['Customer Name'])[0] + 1), index=df.index).astype(str)
df['Customer Name2'] = 'Customer ' + s
print (df)
Customer Name Purchase Date Customer Name1 Customer Name2
0 Mark 2020-01-10 Customer 1 Customer 1
1 Scott 2020-02-01 Customer 2 Customer 2
2 Mark 2020-04-01 Customer 3 Customer 1
3 Peter 2020-06-12 Customer 4 Customer 3
</code></pre>
| 2020-08-10T14:09:51.810000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html
|
If all values are unique use index values converted to strings:
df['Customer Name'] = 'Customer ' + (df.index + 1).astype(str)
print (df)
Customer Name Purchase Date
0 Customer 1 2020-01-10
1 Customer 2 2020-02-01
2 Customer 3 2020-04-01
3 Customer 4 2020-06-12
If you need to number the unique values of column Customer Name, use factorize:
s = pd.Series((pd.factorize(df['Customer Name'])[0] + 1), index=df.index).astype(str)
df['Customer Name'] = 'Customer ' + s
print (df)
Customer Name Purchase Date
0 Customer 1 2020-01-10
1 Customer 2 2020-02-01
2 Customer 3 2020-04-01
3 Customer 4 2020-06-12
The difference becomes visible with duplicated values:
print (df)
Customer Name Purchase Date
0 Mark 2020-01-10
1 Scott 2020-02-01
2 Mark 2020-04-01
3 Peter 2020-06-12
df['Customer Name1'] = 'Customer ' + (df.index + 1).astype(str)
s = pd.Series((pd.factorize(df['Customer Name'])[0] + 1), index=df.index).astype(str)
df['Customer Name2'] = 'Customer ' + s
print (df)
Customer Name Purchase Date Customer Name1 Customer Name2
0 Mark 2020-01-10 Customer 1 Customer 1
1 Scott 2020-02-01 Customer 2 Customer 2
2 Mark 2020-04-01 Customer 3 Customer 1
3 Peter 2020-06-12 Customer 4 Customer 3
| 0
| 1,354
|
Pandas - Replacing text in column with dummy values
I have a Dataframe that has some customer info as shown below:
Customer Name, Purchase Date
Kevin, 2020-01-10
Scott, 2020-02-01
Mark, 2020-04-01
Peter, 2020-06-12
I would like to replace the Customer Name column with dummy values such as "Customer 1", "Customer 2" and so on. Expected output:
Customer Name, Purchase Date
Customer 1, 2020-01-10
Customer 2, 2020-02-01
Customer 3, 2020-04-01
Customer 4, 2020-06-12
I would like this to be based on the DataFrame shape
|
64,203,669
|
Filter pandas column based on frequency of occurrence
|
<p>My df:</p>
<pre><code>data = [
{'Part': 'A', 'Value': 10, 'Delivery': 10},
{'Part': 'B', 'Value': 12, 'Delivery': 8.5},
{'Part': 'C', 'Value': 10, 'Delivery': 10.1},
{'Part': 'D', 'Value': 10, 'Delivery': 10.3},
{'Part': 'E', 'Value': 11, 'Delivery': 9.2},
{'Part': 'F', 'Value': 15, 'Delivery': 7.3},
{'Part': 'G', 'Value': 10, 'Delivery': 10.1},
{'Part': 'H', 'Value': 12, 'Delivery': 8.1},
{'Part': 'I', 'Value': 12, 'Delivery': 8.0},
{'Part': 'J', 'Value': 10, 'Delivery': 10.2},
{'Part': 'K', 'Value': 8, 'Delivery': 12.5}
]
df = pd.DataFrame(data)
</code></pre>
<p>I wish to filter the given dataframe so that it contains only the most frequently occurring "Value".</p>
<p>Expected output:</p>
<pre><code>data = [
{'Part': 'A', 'Value': 10, 'Delivery': 10},
{'Part': 'C', 'Value': 10, 'Delivery': 10.1},
{'Part': 'D', 'Value': 10, 'Delivery': 10.3},
{'Part': 'G', 'Value': 10, 'Delivery': 10.1},
{'Part': 'J', 'Value': 10, 'Delivery': 10.2}
]
df_output = pd.DataFrame(data)
</code></pre>
<p>is there any way to do this?</p>
| 64,203,693
| 2020-10-05T06:52:10.883000
| 1
| null | 1
| 49
|
pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mode.html" rel="nofollow noreferrer"><code>Series.mode</code></a> and select the first value with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iat.html" rel="nofollow noreferrer"><code>Series.iat</code></a>:</p>
<pre><code>df1 = df[df['Value'].eq(df['Value'].mode().iat[0])]
</code></pre>
<p>Or compare with the first index value of the Series created by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a>, because by default its values are sorted by counts:</p>
<pre><code>df1 = df[df['Value'].eq(df['Value'].value_counts().index[0])]
print (df1)
Part Value Delivery
0 A 10 10.0
2 C 10 10.1
3 D 10 10.3
6 G 10 10.1
9 J 10 10.2
</code></pre>
| 2020-10-05T06:53:50.583000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html
|
pandas.crosstab#
pandas.crosstab#
pandas.crosstab(index, columns, values=None, rownames=None, colnames=None, aggfunc=None, margins=False, margins_name='All', dropna=True, normalize=False)[source]#
Compute a simple cross tabulation of two (or more) factors.
By default, computes a frequency table of the factors unless an
array of values and an aggregation function are passed.
Parameters
indexarray-like, Series, or list of arrays/SeriesValues to group by in the rows.
columnsarray-like, Series, or list of arrays/SeriesValues to group by in the columns.
Use boolean indexing with Series.mode and select the first value with Series.iat:
df1 = df[df['Value'].eq(df['Value'].mode().iat[0])]
Or compare with the first index value of the Series created by Series.value_counts, because by default its values are sorted by counts:
df1 = df[df['Value'].eq(df['Value'].value_counts().index[0])]
print (df1)
Part Value Delivery
0 A 10 10.0
2 C 10 10.1
3 D 10 10.3
6 G 10 10.1
9 J 10 10.2
valuesarray-like, optionalArray of values to aggregate according to the factors.
Requires aggfunc be specified.
rownamessequence, default NoneIf passed, must match number of row arrays passed.
colnamessequence, default NoneIf passed, must match number of column arrays passed.
aggfuncfunction, optionalIf specified, requires values be specified as well.
marginsbool, default FalseAdd row/column margins (subtotals).
margins_namestr, default ‘All’Name of the row/column that will contain the totals
when margins is True.
dropnabool, default TrueDo not include columns whose entries are all NaN.
normalizebool, {‘all’, ‘index’, ‘columns’}, or {0,1}, default FalseNormalize by dividing all values by the sum of values.
If passed ‘all’ or True, will normalize over all values.
If passed ‘index’ will normalize over each row.
If passed ‘columns’ will normalize over each column.
If margins is True, will also normalize margin values.
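A short sketch of the normalize option (the arrays are made up for illustration); with normalize="index" each row of the result sums to 1:
import numpy as np
import pandas as pd
a = np.array(["foo", "foo", "bar", "bar", "foo"])
b = np.array(["one", "two", "one", "one", "one"])
print(pd.crosstab(a, b, normalize="index"))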
Returns
DataFrameCross tabulation of the data.
See also
DataFrame.pivotReshape data based on column values.
pivot_tableCreate a pivot table as a DataFrame.
Notes
Any Series passed will have their name attributes used unless row or column
names for the cross-tabulation are specified.
Any input passed containing Categorical data will have all of its
categories included in the cross-tabulation, even if the actual data does
not contain any instances of a particular category.
In the event that there aren’t overlapping indexes an empty DataFrame will
be returned.
Reference the user guide for more examples.
Examples
>>> a = np.array(["foo", "foo", "foo", "foo", "bar", "bar",
... "bar", "bar", "foo", "foo", "foo"], dtype=object)
>>> b = np.array(["one", "one", "one", "two", "one", "one",
... "one", "two", "two", "two", "one"], dtype=object)
>>> c = np.array(["dull", "dull", "shiny", "dull", "dull", "shiny",
... "shiny", "dull", "shiny", "shiny", "shiny"],
... dtype=object)
>>> pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])
b one two
c dull shiny dull shiny
a
bar 1 2 1 0
foo 2 2 1 2
Here ‘c’ and ‘f’ are not represented in the data and will not be
shown in the output because dropna is True by default. Set
dropna=False to preserve categories with no data.
>>> foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])
>>> bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f'])
>>> pd.crosstab(foo, bar)
col_0 d e
row_0
a 1 0
b 0 1
>>> pd.crosstab(foo, bar, dropna=False)
col_0 d e f
row_0
a 1 0 0
b 0 1 0
c 0 0 0
| 562
| 1,031
|
Filter pandas column based on frequency of occurrence
My df:
data = [
{'Part': 'A', 'Value': 10, 'Delivery': 10},
{'Part': 'B', 'Value': 12, 'Delivery': 8.5},
{'Part': 'C', 'Value': 10, 'Delivery': 10.1},
{'Part': 'D', 'Value': 10, 'Delivery': 10.3},
{'Part': 'E', 'Value': 11, 'Delivery': 9.2},
{'Part': 'F', 'Value': 15, 'Delivery': 7.3},
{'Part': 'G', 'Value': 10, 'Delivery': 10.1},
{'Part': 'H', 'Value': 12, 'Delivery': 8.1},
{'Part': 'I', 'Value': 12, 'Delivery': 8.0},
{'Part': 'J', 'Value': 10, 'Delivery': 10.2},
{'Part': 'K', 'Value': 8, 'Delivery': 12.5}
]
df = pd.DataFrame(data)
I wish to filter the given dataframe so that it contains only the most frequently occurring "Value".
Expected output:
data = [
{'Part': 'A', 'Value': 10, 'Delivery': 10},
{'Part': 'C', 'Value': 10, 'Delivery': 10.1},
{'Part': 'D', 'Value': 10, 'Delivery': 10.3},
{'Part': 'G', 'Value': 10, 'Delivery': 10.1},
{'Part': 'J', 'Value': 10, 'Delivery': 10.2}
]
df_output = pd.DataFrame(data)
is there any way to do this?
|