pandas arrays, scalars, and data types
Objects#
For most data types, pandas uses NumPy arrays as the concrete
objects contained within an Index, Series, or
DataFrame.
For some data types, pandas extends NumPy’s type system. String aliases for these types
can be found at dtypes.
Kind of Data | pandas Data Type | Scalar | Array
---|---|---|---
TZ-aware datetime | DatetimeTZDtype | Timestamp | Datetimes
Timedeltas | (none) | Timedelta | Timedeltas
Period (time spans) | PeriodDtype | Period | Periods
Intervals | IntervalDtype | Interval | Intervals
Nullable Integer | Int64Dtype, … | (none) | Nullable integer
Categorical | CategoricalDtype | (none) | Categoricals
Sparse | SparseDtype | (none) | Sparse
Strings | StringDtype | str | Strings
Boolean (with NA) | BooleanDtype | bool | Nullable Boolean
PyArrow | ArrowDtype | Python Scalars or NA | PyArrow
pandas and third-party libraries can extend NumPy’s type system (see Extension types).
The top-level array() method can be used to create a new array, which may be
stored in a Series, Index, or as a column in a DataFrame.
array(data[, dtype, copy])
Create an array.
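As a quick illustration, a minimal sketch of the top-level constructor (output elided; the dtype inferred for untyped input may vary by pandas version):
```
>>> import pandas as pd
>>> pd.array([1, 2, None], dtype="Int64")       # nullable IntegerArray holding pd.NA
>>> pd.array(["a", "b", None], dtype="string")  # StringArray
>>> pd.Series(pd.array([1, 2, None], dtype="Int64"))  # the array is stored in the Series unchanged
```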
PyArrow#
Warning
This feature is experimental, and the API can change in a future release without warning.
The arrays.ArrowExtensionArray is backed by a pyarrow.ChunkedArray with a
pyarrow.DataType instead of a NumPy array and data type. The .dtype of an arrays.ArrowExtensionArray
is an ArrowDtype.
PyArrow provides array and data type support similar to NumPy, including
first-class nullability support for all data types, immutability, and more.
Note
For string types (pyarrow.string(), string[pyarrow]), PyArrow support is still facilitated
by arrays.ArrowStringArray and StringDtype("pyarrow"). See the string section
below.
While individual values in an arrays.ArrowExtensionArray are stored as PyArrow objects, scalars are returned
as Python scalars corresponding to the data type, e.g. a PyArrow int64 will be returned as a Python int, or NA for missing
values.
arrays.ArrowExtensionArray(values)
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
ArrowDtype(pyarrow_dtype)
An ExtensionDtype for PyArrow data types.
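For illustration, a minimal sketch assuming pyarrow is installed (output elided):
```
>>> import pandas as pd
>>> import pyarrow as pa
>>> arr = pd.array([1, 2, None], dtype=pd.ArrowDtype(pa.int64()))
>>> arr.dtype            # an ArrowDtype, spelled "int64[pyarrow]" as a string alias
>>> arr[0], arr[2]       # scalars come back as Python int 1 and pd.NA
>>> pd.Series([1, 2, None], dtype="int64[pyarrow]")  # equivalent string-alias spelling
```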
Datetimes#
NumPy cannot natively represent timezone-aware datetimes. pandas supports this
with the arrays.DatetimeArray extension array, which can hold timezone-naive
or timezone-aware values.
Timestamp, a subclass of datetime.datetime, is pandas’
scalar type for timezone-naive or timezone-aware datetime data.
Timestamp([ts_input, freq, tz, unit, year, ...])
Pandas replacement for python datetime.datetime object.
Properties#
Timestamp.asm8
Return numpy datetime64 format in nanoseconds.
Timestamp.day
Timestamp.dayofweek
Return day of the week.
Timestamp.day_of_week
Return day of the week.
Timestamp.dayofyear
Return the day of the year.
Timestamp.day_of_year
Return the day of the year.
Timestamp.days_in_month
Return the number of days in the month.
Timestamp.daysinmonth
Return the number of days in the month.
Timestamp.fold
Timestamp.hour
Timestamp.is_leap_year
Return True if year is a leap year.
Timestamp.is_month_end
Return True if date is last day of month.
Timestamp.is_month_start
Return True if date is first day of month.
Timestamp.is_quarter_end
Return True if date is last day of the quarter.
Timestamp.is_quarter_start
Return True if date is first day of the quarter.
Timestamp.is_year_end
Return True if date is last day of the year.
Timestamp.is_year_start
Return True if date is first day of the year.
Timestamp.max
Timestamp.microsecond
Timestamp.min
Timestamp.minute
Timestamp.month
Timestamp.nanosecond
Timestamp.quarter
Return the quarter of the year.
Timestamp.resolution
Timestamp.second
Timestamp.tz
Alias for tzinfo.
Timestamp.tzinfo
Timestamp.value
Timestamp.week
Return the week number of the year.
Timestamp.weekofyear
Return the week number of the year.
Timestamp.year
Methods#
Timestamp.astimezone(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.ceil(freq[, ambiguous, nonexistent])
Return a new Timestamp ceiled to this resolution.
Timestamp.combine(date, time)
Combine date, time into datetime with same date and time fields.
Timestamp.ctime
Return ctime() style string.
Timestamp.date
Return date object with same year, month and day.
Timestamp.day_name
Return the day name of the Timestamp with specified locale.
Timestamp.dst
Return self.tzinfo.dst(self).
Timestamp.floor(freq[, ambiguous, nonexistent])
Return a new Timestamp floored to this resolution.
Timestamp.freq
Timestamp.freqstr
Return the frequency associated with the Timestamp as a string.
Timestamp.fromordinal(ordinal[, freq, tz])
Construct a timestamp from a proleptic Gregorian ordinal.
Timestamp.fromtimestamp(ts)
Transform timestamp[, tz] to tz's local time from POSIX timestamp.
Timestamp.isocalendar
Return a 3-tuple containing ISO year, week number, and weekday.
Timestamp.isoformat
Return the time formatted according to ISO 8601.
Timestamp.isoweekday()
Return the day of the week represented by the date.
Timestamp.month_name
Return the month name of the Timestamp with specified locale.
Timestamp.normalize
Normalize Timestamp to midnight, preserving tz information.
Timestamp.now([tz])
Return new Timestamp object representing current time local to tz.
Timestamp.replace([year, month, day, hour, ...])
Implements datetime.replace, handles nanoseconds.
Timestamp.round(freq[, ambiguous, nonexistent])
Round the Timestamp to the specified resolution.
Timestamp.strftime(format)
Return a formatted string of the Timestamp.
Timestamp.strptime(string, format)
Function is not implemented.
Timestamp.time
Return time object with same time but with tzinfo=None.
Timestamp.timestamp
Return POSIX timestamp as float.
Timestamp.timetuple
Return time tuple, compatible with time.localtime().
Timestamp.timetz
Return time object with same time and tzinfo.
Timestamp.to_datetime64
Return a numpy.datetime64 object with 'ns' precision.
Timestamp.to_numpy
Convert the Timestamp to a NumPy datetime64.
Timestamp.to_julian_date()
Convert Timestamp to a Julian Date.
Timestamp.to_period
Return a Period of which this timestamp is an observation.
Timestamp.to_pydatetime
Convert a Timestamp object to a native Python datetime object.
Timestamp.today([tz])
Return the current time in the local timezone.
Timestamp.toordinal
Return proleptic Gregorian ordinal.
Timestamp.tz_convert(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.tz_localize(tz[, ambiguous, ...])
Localize the Timestamp to a timezone.
Timestamp.tzname
Return self.tzinfo.tzname(self).
Timestamp.utcfromtimestamp(ts)
Construct a naive UTC datetime from a POSIX timestamp.
Timestamp.utcnow()
Return a new Timestamp representing UTC day and time.
Timestamp.utcoffset
Return self.tzinfo.utcoffset(self).
Timestamp.utctimetuple
Return UTC time tuple, compatible with time.localtime().
Timestamp.weekday()
Return the day of the week represented by the date.
A collection of timestamps may be stored in an arrays.DatetimeArray.
For timezone-aware data, the .dtype of an arrays.DatetimeArray is a
DatetimeTZDtype. For timezone-naive data, np.dtype("datetime64[ns]")
is used.
If the data are timezone-aware, then every value in the array must have the same timezone.
arrays.DatetimeArray(values[, dtype, freq, copy])
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
DatetimeTZDtype([unit, tz])
An ExtensionDtype for timezone-aware datetime data.
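A minimal sketch of the tz-aware vs. tz-naive dtypes described above (output elided):
```
>>> import pandas as pd
>>> aware = pd.Series(pd.date_range("2000-01-01", periods=3, tz="US/Eastern"))
>>> aware.dtype       # DatetimeTZDtype: datetime64[ns, US/Eastern]
>>> aware.array       # the underlying arrays.DatetimeArray
>>> aware.iloc[0]     # scalar is a Timestamp carrying tzinfo
>>> naive = pd.Series(pd.date_range("2000-01-01", periods=3))
>>> naive.dtype       # plain numpy dtype("datetime64[ns]")
```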
Timedeltas#
NumPy can natively represent timedeltas. pandas provides Timedelta
for symmetry with Timestamp.
Timedelta([value, unit])
Represents a duration, the difference between two dates or times.
Properties#
Timedelta.asm8
Return a numpy timedelta64 array scalar view.
Timedelta.components
Return a components namedtuple-like.
Timedelta.days
Timedelta.delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
Timedelta.freq
(DEPRECATED) Freq property.
Timedelta.is_populated
(DEPRECATED) Is_populated property.
Timedelta.max
Timedelta.microseconds
Timedelta.min
Timedelta.nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Timedelta.resolution
Timedelta.seconds
Timedelta.value
Timedelta.view
Array view compatibility.
Methods#
Timedelta.ceil(freq)
Return a new Timedelta ceiled to this resolution.
Timedelta.floor(freq)
Return a new Timedelta floored to this resolution.
Timedelta.isoformat
Format the Timedelta as ISO 8601 Duration.
Timedelta.round(freq)
Round the Timedelta to the specified resolution.
Timedelta.to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
Timedelta.to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
Timedelta.to_numpy
Convert the Timedelta to a NumPy timedelta64.
Timedelta.total_seconds
Total seconds in the duration.
A collection of Timedelta may be stored in a TimedeltaArray.
arrays.TimedeltaArray(values[, dtype, freq, ...])
Pandas ExtensionArray for timedelta data.
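A small sketch of the Timedelta scalar and a timedelta-backed collection (output elided):
```
>>> import pandas as pd
>>> td = pd.Timedelta("1 days 2 hours 30 minutes")
>>> td.components          # namedtuple-like breakdown: days=1, hours=2, minutes=30, ...
>>> td.total_seconds()     # 95400.0
>>> tdi = pd.to_timedelta(["1 days", "2 days", "3 days"])
>>> tdi                    # TimedeltaIndex backed by an arrays.TimedeltaArray
```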
Periods#
pandas represents spans of times as Period objects.
Period#
Period([value, freq, ordinal, year, month, ...])
Represents a period of time.
Properties#
Period.day
Get day of the month that a Period falls on.
Period.dayofweek
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.day_of_week
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.dayofyear
Return the day of the year.
Period.day_of_year
Return the day of the year.
Period.days_in_month
Get the total number of days in the month that this period falls on.
Period.daysinmonth
Get the total number of days of the month that this period falls on.
Period.end_time
Get the Timestamp for the end of the period.
Period.freq
Period.freqstr
Return a string representation of the frequency.
Period.hour
Get the hour of the day component of the Period.
Period.is_leap_year
Return True if the period's year is in a leap year.
Period.minute
Get minute of the hour component of the Period.
Period.month
Return the month this Period falls on.
Period.ordinal
Period.quarter
Return the quarter this Period falls on.
Period.qyear
Fiscal year the Period lies in according to its starting-quarter.
Period.second
Get the second component of the Period.
Period.start_time
Get the Timestamp for the start of the period.
Period.week
Get the week of the year on the given Period.
Period.weekday
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.weekofyear
Get the week of the year on the given Period.
Period.year
Return the year this Period falls on.
Methods#
Period.asfreq
Convert Period to desired frequency, at the start or end of the interval.
Period.now
Return the period of now's date.
Period.strftime
Returns a formatted string representation of the Period.
Period.to_timestamp
Return the Timestamp representation of the Period.
A collection of Period may be stored in an arrays.PeriodArray.
Every period in an arrays.PeriodArray must have the same freq.
arrays.PeriodArray(values[, dtype, freq, copy])
Pandas ExtensionArray for storing Period data.
PeriodDtype([freq])
An ExtensionDtype for Period data.
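For example, a minimal sketch of Period scalars and a PeriodDtype-backed collection (output elided):
```
>>> import pandas as pd
>>> p = pd.Period("2023-06", freq="M")
>>> p.start_time, p.end_time     # Timestamps bounding the monthly span
>>> p.asfreq("D", how="end")     # Period('2023-06-30', 'D')
>>> idx = pd.period_range("2023-01", periods=3, freq="M")
>>> idx.dtype                    # period[M], a PeriodDtype
```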
Intervals#
Arbitrary intervals can be represented as Interval objects.
Interval
Immutable object implementing an Interval, a bounded slice-like interval.
Properties#
Interval.closed
String describing the inclusive side of the intervals.
Interval.closed_left
Check if the interval is closed on the left side.
Interval.closed_right
Check if the interval is closed on the right side.
Interval.is_empty
Indicates if an interval is empty, meaning it contains no points.
Interval.left
Left bound for the interval.
Interval.length
Return the length of the Interval.
Interval.mid
Return the midpoint of the Interval.
Interval.open_left
Check if the interval is open on the left side.
Interval.open_right
Check if the interval is open on the right side.
Interval.overlaps
Check whether two Interval objects overlap.
Interval.right
Right bound for the interval.
A collection of intervals may be stored in an arrays.IntervalArray.
arrays.IntervalArray(data[, closed, dtype, ...])
Pandas array for interval data that are closed on the same side.
IntervalDtype([subtype, closed])
An ExtensionDtype for Interval data.
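A minimal sketch (output elided; the exact dtype string may vary by version):
```
>>> import pandas as pd
>>> iv = pd.Interval(0, 5, closed="right")    # the interval (0, 5]
>>> 5 in iv, 0 in iv                          # (True, False)
>>> iv.length, iv.mid                         # (5, 2.5)
>>> arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
>>> arr.dtype                                 # an IntervalDtype, e.g. interval[int64, right]
```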
Nullable integer#
numpy.ndarray cannot natively represent integer data with missing values.
pandas provides this through arrays.IntegerArray.
arrays.IntegerArray(values, mask[, copy])
Array of integer (optional missing) values.
Int8Dtype()
An ExtensionDtype for int8 integer data.
Int16Dtype()
An ExtensionDtype for int16 integer data.
Int32Dtype()
An ExtensionDtype for int32 integer data.
Int64Dtype()
An ExtensionDtype for int64 integer data.
UInt8Dtype()
An ExtensionDtype for uint8 integer data.
UInt16Dtype()
An ExtensionDtype for uint16 integer data.
UInt32Dtype()
An ExtensionDtype for uint32 integer data.
UInt64Dtype()
An ExtensionDtype for uint64 integer data.
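A short sketch contrasting the nullable integer dtypes with the NumPy fallback (output elided):
```
>>> import pandas as pd
>>> s = pd.Series([1, 2, None], dtype="Int64")   # capital-I alias for Int64Dtype
>>> s.dtype                                      # Int64Dtype()
>>> s.isna()                                     # the missing entry is pd.NA
>>> pd.Series([1, 2, None]).dtype                # without it, the integers are upcast to float64
```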
Categoricals#
pandas defines a custom data type for representing data that can take only a
limited, fixed set of values. The dtype of a Categorical can be described by
a CategoricalDtype.
CategoricalDtype([categories, ordered])
Type for categorical data with the categories and orderedness.
CategoricalDtype.categories
An Index containing the unique categories allowed.
CategoricalDtype.ordered
Whether the categories have an ordered relationship.
Categorical data can be stored in a pandas.Categorical
Categorical(values[, categories, ordered, ...])
Represent a categorical variable in classic R / S-plus fashion.
The alternative Categorical.from_codes() constructor can be used when you
have the categories and integer codes already:
Categorical.from_codes(codes[, categories, ...])
Make a Categorical type from codes and categories or dtype.
The dtype information is available on the Categorical
Categorical.dtype
The CategoricalDtype for this instance.
Categorical.categories
The categories of this categorical.
Categorical.ordered
Whether the categories have an ordered relationship.
Categorical.codes
The category codes of this categorical.
np.asarray(categorical) works by implementing the array interface. Be aware that this converts
the Categorical back to a NumPy array, so categories and order information are not preserved!
Categorical.__array__([dtype])
The numpy array interface.
A Categorical can be stored in a Series or DataFrame.
To create a Series of dtype category, use cat = s.astype(dtype) or
Series(..., dtype=dtype), where dtype is either the string 'category' or
an instance of CategoricalDtype.
If the Series is of dtype CategoricalDtype, Series.cat can be used to change the categorical
data. See Categorical accessor for more.
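A minimal sketch of both spellings and the accessor (output elided):
```
>>> import pandas as pd
>>> s = pd.Series(["a", "b", "a"], dtype="category")         # string alias
>>> s.cat.categories                                         # Index(['a', 'b'], ...)
>>> dtype = pd.CategoricalDtype(categories=["b", "a"], ordered=True)
>>> s2 = pd.Series(["a", "b", "a"]).astype(dtype)            # explicit CategoricalDtype
>>> s2.cat.ordered                                           # True
```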
Sparse#
Data where a single value is repeated many times (e.g. 0 or NaN) may
be stored efficiently as an arrays.SparseArray.
arrays.SparseArray(data[, sparse_index, ...])
An ExtensionArray for storing sparse data.
SparseDtype([dtype, fill_value])
Dtype for data stored in SparseArray.
The Series.sparse accessor may be used to access sparse-specific attributes
and methods if the Series contains sparse values. See
Sparse accessor and the user guide for more.
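A minimal sketch (output elided):
```
>>> import pandas as pd
>>> arr = pd.arrays.SparseArray([0, 0, 1, 2, 0, 0])   # fill_value defaults to 0 for integer data
>>> arr.dtype                                         # a SparseDtype, e.g. Sparse[int64, 0]
>>> s = pd.Series(arr)
>>> s.sparse.density                                  # fraction of non-fill values, here 2/6
```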
Strings#
When working with text data, where each valid element is a string or missing,
we recommend using StringDtype (with the alias "string").
arrays.StringArray(values[, copy])
Extension array for string data.
arrays.ArrowStringArray(values)
Extension array for string data in a pyarrow.ChunkedArray.
StringDtype([storage])
Extension dtype for string data.
The Series.str accessor is available for Series backed by an arrays.StringArray.
See String handling for more.
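A minimal sketch (output elided; the default storage may differ across versions):
```
>>> import pandas as pd
>>> s = pd.Series(["pandas", None, "arrays"], dtype="string")
>>> s.str.upper()           # the missing entry stays <NA>
>>> s.dtype                 # StringDtype; use "string[pyarrow]" for the PyArrow-backed variant
```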
Nullable Boolean#
The boolean dtype (with the alias "boolean") provides support for storing
boolean data (True, False) with missing values, which is not possible
with a bool numpy.ndarray.
arrays.BooleanArray(values, mask[, copy])
Array of boolean (True/False) data with missing values.
BooleanDtype()
Extension dtype for boolean data.
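A minimal sketch (output elided):
```
>>> import pandas as pd
>>> s = pd.Series([True, False, None], dtype="boolean")
>>> s.dtype                        # BooleanDtype
>>> s & True                       # elementwise ops follow Kleene logic, so <NA> propagates
>>> s.fillna(False).astype(bool)   # convert to a plain NumPy bool array once NA is resolved
```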
Utilities#
Constructors#
api.types.union_categoricals(to_union[, ...])
Combine list-like of Categorical-like, unioning categories.
api.types.infer_dtype
Return a string label of the type of a scalar or list-like of values.
api.types.pandas_dtype(dtype)
Convert input into a pandas only dtype object or a numpy dtype object.
Data type introspection#
api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
api.types.is_extension_type(arr)
(DEPRECATED) Check whether an array-like is of a pandas extension class instance.
api.types.is_extension_array_dtype(arr_or_dtype)
Check if an object is a pandas extension array type.
api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
api.types.is_sparse(arr)
Check whether an array-like is a 1-D pandas sparse array.
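A few illustrative calls, as a sketch; these helpers accept arrays, Series, dtypes, or dtype strings:
```
>>> import pandas as pd
>>> from pandas.api import types
>>> types.is_integer_dtype(pd.Series([1, 2, 3]))                   # True
>>> types.is_float_dtype("float64")                                # True
>>> types.is_bool_dtype(pd.Series([True, None], dtype="boolean"))  # True for the nullable dtype too
>>> types.pandas_dtype("Int64")                                    # Int64Dtype()
```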
Iterable introspection#
api.types.is_dict_like(obj)
Check if the object is dict-like.
api.types.is_file_like(obj)
Check if the object is a file-like object.
api.types.is_list_like
Check if the object is list-like.
api.types.is_named_tuple(obj)
Check if the object is a named tuple.
api.types.is_iterator
Check if the object is an iterator.
Scalar introspection#
api.types.is_bool
Return True if given object is boolean.
api.types.is_categorical(arr)
(DEPRECATED) Check whether an array-like is a Categorical instance.
api.types.is_complex
Return True if given object is complex.
api.types.is_float
Return True if given object is float.
api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
Return True if given object is integer.
api.types.is_interval
api.types.is_number(obj)
Check if the object is a number.
api.types.is_re(obj)
Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar
Return True if given object is scalar.
| reference/arrays.html |
pandas.tseries.offsets.Milli.delta | Milli.delta#
| reference/api/pandas.tseries.offsets.Milli.delta.html |
pandas.tseries.offsets.Nano.freqstr | Nano.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.Nano.freqstr.html |
pandas.tseries.offsets.BusinessMonthBegin.__call__ | BusinessMonthBegin.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.BusinessMonthBegin.__call__.html |
pandas.Timestamp.date | Timestamp.date()#
Return date object with same year, month and day.
| reference/api/pandas.Timestamp.date.html |
pandas.RangeIndex.from_range | classmethod RangeIndex.from_range(data, name=None, dtype=None)#
Create RangeIndex from a range object.
Returns
RangeIndex
| reference/api/pandas.RangeIndex.from_range.html |
pandas.tseries.offsets.QuarterBegin.nanos | QuarterBegin.nanos#
| reference/api/pandas.tseries.offsets.QuarterBegin.nanos.html |
pandas.tseries.offsets.Minute.freqstr | Minute.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.Minute.freqstr.html |
pandas.tseries.offsets.CustomBusinessHour.is_month_end | CustomBusinessHour.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessHour.is_month_end.html |
pandas.PeriodIndex.quarter | property PeriodIndex.quarter#
The quarter of the date.
| reference/api/pandas.PeriodIndex.quarter.html |
pandas.tseries.offsets.BusinessMonthEnd.nanos | BusinessMonthEnd.nanos#
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.nanos.html |
pandas.Series.cat.add_categories | Series.cat.add_categories(*args, **kwargs)#
Add new categories.
new_categories will be included at the last/highest place in the
categories and will be unused directly after this call.
Parameters
new_categories : category or list-like of category
The new categories to be included.
inplace : bool, default False
Whether or not to add the categories inplace or return a copy of this categorical with added categories.
Deprecated since version 1.3.0.
Returns
cat : Categorical or None
Categorical with new categories added, or None if inplace=True.
Raises
ValueError
If the new categories include old categories or do not validate as categories.
See also
rename_categories : Rename categories.
reorder_categories : Reorder categories.
remove_categories : Remove the specified categories.
remove_unused_categories : Remove categories which are not used.
set_categories : Set the categories to the specified ones.
Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a']
| reference/api/pandas.Series.cat.add_categories.html |
Group by: split-apply-combine
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index consists of the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
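For instance, a hypothetical sketch using the animals frame from above, where trimmed_mean is an illustrative helper (not a pandas function) that needs a keyword argument:
```
>>> import functools
>>> def trimmed_mean(x, cut=0.1):
...     # keep values between the cut and (1 - cut) quantiles, then average
...     lo, hi = x.quantile(cut), x.quantile(1 - cut)
...     return x[(x >= lo) & (x <= hi)].mean()
>>> animals.groupby("kind").agg(
...     trimmed_weight=("weight", functools.partial(trimmed_mean, cut=0.25))
... )
```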
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions, when applied to a GroupBy object, will automatically transform
the input and return an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions, when applied to a groupby object, act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these filtration methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with those arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
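Because the wrapper forwards any arguments you pass, extra keyword arguments reach each per-group call. As a small sketch reusing the grouped object from above (the limit argument is purely illustrative), we could restrict the forward fill to a single consecutive missing value per group:
# Keyword arguments are forwarded to each group's fillna call.
grouped.fillna(method="pad", limit=1)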
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
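As a minimal sketch of that promotion (the data and the condition used here are purely illustrative, not part of the examples above): one group returns integers, another returns floats, and the combined result is upcast to a common float64 dtype, just as it would be during DataFrame construction.
import pandas as pd

df_mixed = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1, 2, 3, 4]})

# Group "a" keeps integer values; group "b" is cast to float inside apply.
out = df_mixed.groupby("key", group_keys=False)["val"].apply(
    lambda x: x.astype(float) if x.iloc[0] >= 3 else x
)

out.dtype  # float64 -- the int64 and float64 group results share a common dtype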
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly: the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
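A minimal sketch of the required signature, assuming the optional Numba dependency is installed (the column and key names are illustrative only):
import pandas as pd

def group_demean(values, index):
    # values: the group's data as a NumPy array; index: the group's index values.
    return values - values.mean()

data = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1.0, 2.0, 3.0, 5.0]})

# The user-defined function is JIT-compiled and receives each group's data
# and index as NumPy arrays.
result = data.groupby("key")["val"].transform(group_demean, engine="numba")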
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only of interest for one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible grouper values (observed=False) or only those
values that are actually observed (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned grouping index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
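A minimal sketch of this behavior (the values are illustrative): the row whose grouping key is missing is simply dropped, so no “NA” group appears in the result.
import pandas as pd

ser = pd.Series([1, 2, 3])

# The second element's key is missing, so it is excluded from the grouping.
ser.groupby(["a", None, "a"]).sum()
# a    4
# dtype: int64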
Grouping with ordered factors#
Categorical variables represented as instances of pandas’ Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are not datetime-like, the following procedure can be used.
In the following examples, df.index // 5 returns an integer array which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer ones. Using df.index // 5, we place the samples in bins; applying the std() function then aggregates the information contained in each bin into a single value (its standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| user_guide/groupby.html |
pandas.read_orc | `pandas.read_orc`
Load an ORC object from the file path, returning a DataFrame. | pandas.read_orc(path, columns=None, **kwargs)[source]#
Load an ORC object from the file path, returning a DataFrame.
New in version 1.0.0.
Parameters
pathstr, path object, or file-like objectString, path object (implementing os.PathLike[str]), or file-like
object implementing a binary read() function. The string could be a URL.
Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be:
file://localhost/path/to/table.orc.
columnslist, default NoneIf not None, only these columns will be read from the file.
**kwargsAny additional kwargs are passed to pyarrow.
Returns
DataFrame
Notes
Before using this function you should read the user guide about ORC
and install optional dependencies.
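A minimal usage sketch (the file path and column names are hypothetical, and the optional pyarrow dependency must be installed):
import pandas as pd

# Read only two columns from a local ORC file (path is illustrative).
df = pd.read_orc("path/to/table.orc", columns=["a", "b"])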
| reference/api/pandas.read_orc.html |
pandas.tseries.offsets.DateOffset.apply | pandas.tseries.offsets.DateOffset.apply | DateOffset.apply()#
| reference/api/pandas.tseries.offsets.DateOffset.apply.html |
pandas.tseries.offsets.FY5253.rollforward | `pandas.tseries.offsets.FY5253.rollforward`
Roll provided date forward to next offset only if not on offset. | FY5253.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.FY5253.rollforward.html |
pandas.tseries.offsets.SemiMonthBegin.day_of_month | pandas.tseries.offsets.SemiMonthBegin.day_of_month | SemiMonthBegin.day_of_month#
| reference/api/pandas.tseries.offsets.SemiMonthBegin.day_of_month.html |
pandas.core.groupby.GroupBy.count | `pandas.core.groupby.GroupBy.count`
Compute count of group, excluding missing values. | final GroupBy.count()[source]#
Compute count of group, excluding missing values.
Returns
Series or DataFrameCount of values within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
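A small illustrative sketch (assuming the usual import pandas as pd and import numpy as np), showing that missing values are excluded from the count:
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, np.nan, 3.0]})
>>> df.groupby("key").count()
     val
key
a      1
b      1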
| reference/api/pandas.core.groupby.GroupBy.count.html |
pandas.Series.dt.isocalendar | `pandas.Series.dt.isocalendar`
Calculate year, week, and day according to the ISO 8601 standard.
```
>>> ser = pd.to_datetime(pd.Series(["2010-01-01", pd.NaT]))
>>> ser.dt.isocalendar()
year week day
0 2009 53 5
1 <NA> <NA> <NA>
>>> ser.dt.isocalendar().week
0 53
1 <NA>
Name: week, dtype: UInt32
``` | Series.dt.isocalendar()[source]#
Calculate year, week, and day according to the ISO 8601 standard.
New in version 1.1.0.
Returns
DataFrameWith columns year, week and day.
See also
Timestamp.isocalendarFunction return a 3-tuple containing ISO year, week number, and weekday for the given Timestamp object.
datetime.date.isocalendarReturn a named tuple object with three components: year, week and weekday.
Examples
>>> ser = pd.to_datetime(pd.Series(["2010-01-01", pd.NaT]))
>>> ser.dt.isocalendar()
year week day
0 2009 53 5
1 <NA> <NA> <NA>
>>> ser.dt.isocalendar().week
0 53
1 <NA>
Name: week, dtype: UInt32
| reference/api/pandas.Series.dt.isocalendar.html |
pandas.MultiIndex.swaplevel | `pandas.MultiIndex.swaplevel`
Swap level i with level j.
```
>>> mi = pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
... codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
>>> mi
MultiIndex([('a', 'bb'),
('a', 'aa'),
('b', 'bb'),
('b', 'aa')],
)
>>> mi.swaplevel(0, 1)
MultiIndex([('bb', 'a'),
('aa', 'a'),
('bb', 'b'),
('aa', 'b')],
)
``` | MultiIndex.swaplevel(i=- 2, j=- 1)[source]#
Swap level i with level j.
Calling this method does not change the ordering of the values.
Parameters
iint, str, default -2First level of index to be swapped. Can pass level name as string.
Type of parameters can be mixed.
jint, str, default -1Second level of index to be swapped. Can pass level name as string.
Type of parameters can be mixed.
Returns
MultiIndexA new MultiIndex.
See also
Series.swaplevelSwap levels i and j in a MultiIndex.
DataFrame.swaplevelSwap levels i and j in a MultiIndex on a particular axis.
Examples
>>> mi = pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
... codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
>>> mi
MultiIndex([('a', 'bb'),
('a', 'aa'),
('b', 'bb'),
('b', 'aa')],
)
>>> mi.swaplevel(0, 1)
MultiIndex([('bb', 'a'),
('aa', 'a'),
('bb', 'b'),
('aa', 'b')],
)
| reference/api/pandas.MultiIndex.swaplevel.html |
pandas.core.groupby.GroupBy.median | `pandas.core.groupby.GroupBy.median`
Compute median of groups, excluding missing values. | final GroupBy.median(numeric_only=_NoDefault.no_default)[source]#
Compute median of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex
Parameters
numeric_onlybool, default TrueInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
Returns
Series or DataFrameMedian of values within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
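A small illustrative sketch (the data are made up for the example):
>>> ser = pd.Series([1, 2, 3, 4, 5], index=["a", "a", "a", "b", "b"])
>>> ser.groupby(level=0).median()
a    2.0
b    4.5
dtype: float64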
| reference/api/pandas.core.groupby.GroupBy.median.html |
pandas.api.extensions.ExtensionArray._concat_same_type | `pandas.api.extensions.ExtensionArray._concat_same_type`
Concatenate multiple array of this dtype. | classmethod ExtensionArray._concat_same_type(to_concat)[source]#
Concatenate multiple array of this dtype.
Parameters
to_concatsequence of this type
Returns
ExtensionArray
| reference/api/pandas.api.extensions.ExtensionArray._concat_same_type.html |
pandas.tseries.offsets.Week.is_year_start | `pandas.tseries.offsets.Week.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | Week.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.Week.is_year_start.html |
pandas.Series.quantile | `pandas.Series.quantile`
Return value at the given quantile.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
``` | Series.quantile(q=0.5, interpolation='linear')[source]#
Return value at the given quantile.
Parameters
qfloat or array-like, default 0.5 (50% quantile)The quantile(s) to compute, which can lie in range: 0 <= q <= 1.
interpolation{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the
fractional part of the index surrounded by i and j.
lower: i.
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
Returns
float or SeriesIf q is an array, a Series will be returned where the
index is q and the values are the quantiles, otherwise
a float will be returned.
See also
core.window.Rolling.quantileCalculate the rolling quantile.
numpy.percentileReturns the q-th percentile(s) of the array elements.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
| reference/api/pandas.Series.quantile.html |
pandas.tseries.offsets.BusinessHour.normalize | pandas.tseries.offsets.BusinessHour.normalize | BusinessHour.normalize#
| reference/api/pandas.tseries.offsets.BusinessHour.normalize.html |
pandas.api.types.is_float_dtype | `pandas.api.types.is_float_dtype`
Check whether the provided array or dtype is of a float dtype.
```
>>> is_float_dtype(str)
False
>>> is_float_dtype(int)
False
>>> is_float_dtype(float)
True
>>> is_float_dtype(np.array(['a', 'b']))
False
>>> is_float_dtype(pd.Series([1, 2]))
False
>>> is_float_dtype(pd.Index([1, 2.]))
True
``` | pandas.api.types.is_float_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of a float dtype.
Parameters
arr_or_dtypearray-like or dtypeThe array or dtype to check.
Returns
booleanWhether or not the array or dtype is of a float dtype.
Examples
>>> is_float_dtype(str)
False
>>> is_float_dtype(int)
False
>>> is_float_dtype(float)
True
>>> is_float_dtype(np.array(['a', 'b']))
False
>>> is_float_dtype(pd.Series([1, 2]))
False
>>> is_float_dtype(pd.Index([1, 2.]))
True
| reference/api/pandas.api.types.is_float_dtype.html |
pandas.DataFrame.items | `pandas.DataFrame.items`
Iterate over (column name, Series) pairs.
```
>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
... 'population': [1864, 22000, 80000]},
... index=['panda', 'polar', 'koala'])
>>> df
species population
panda bear 1864
polar bear 22000
koala marsupial 80000
>>> for label, content in df.items():
... print(f'label: {label}')
... print(f'content: {content}', sep='\n')
...
label: species
content:
panda bear
polar bear
koala marsupial
Name: species, dtype: object
label: population
content:
panda 1864
polar 22000
koala 80000
Name: population, dtype: int64
``` | DataFrame.items()[source]#
Iterate over (column name, Series) pairs.
Iterates over the DataFrame columns, returning a tuple with
the column name and the content as a Series.
Yields
labelobjectThe column names for the DataFrame being iterated over.
contentSeriesThe column entries belonging to each label, as a Series.
See also
DataFrame.iterrowsIterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuplesIterate over DataFrame rows as namedtuples of the values.
Examples
>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
... 'population': [1864, 22000, 80000]},
... index=['panda', 'polar', 'koala'])
>>> df
species population
panda bear 1864
polar bear 22000
koala marsupial 80000
>>> for label, content in df.items():
... print(f'label: {label}')
... print(f'content: {content}', sep='\n')
...
label: species
content:
panda bear
polar bear
koala marsupial
Name: species, dtype: object
label: population
content:
panda 1864
polar 22000
koala 80000
Name: population, dtype: int64
| reference/api/pandas.DataFrame.items.html |
pandas.tseries.offsets.Micro.is_quarter_start | `pandas.tseries.offsets.Micro.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | Micro.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.Micro.is_quarter_start.html |
pandas.tseries.offsets.CustomBusinessDay.is_month_end | `pandas.tseries.offsets.CustomBusinessDay.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | CustomBusinessDay.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessDay.is_month_end.html |
pandas.tseries.offsets.BusinessDay.offset | `pandas.tseries.offsets.BusinessDay.offset`
Alias for self._offset. | BusinessDay.offset#
Alias for self._offset.
| reference/api/pandas.tseries.offsets.BusinessDay.offset.html |
pandas.tseries.offsets.BYearEnd.month | pandas.tseries.offsets.BYearEnd.month | BYearEnd.month#
| reference/api/pandas.tseries.offsets.BYearEnd.month.html |
pandas.Series.sparse.npoints | `pandas.Series.sparse.npoints`
The number of non- fill_value points.
Examples
```
>>> s = SparseArray([0, 0, 1, 1, 1], fill_value=0)
>>> s.npoints
3
``` | Series.sparse.npoints[source]#
The number of non- fill_value points.
Examples
>>> s = SparseArray([0, 0, 1, 1, 1], fill_value=0)
>>> s.npoints
3
| reference/api/pandas.Series.sparse.npoints.html |
pandas.Categorical.__array__ | `pandas.Categorical.__array__`
The numpy array interface.
A numpy array of either the specified dtype or,
if dtype==None (default), the same dtype as
categorical.categories.dtype. | Categorical.__array__(dtype=None)[source]#
The numpy array interface.
Returns
numpy.arrayA numpy array of either the specified dtype or,
if dtype==None (default), the same dtype as
categorical.categories.dtype.
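A small illustrative sketch: this interface is what lets np.asarray work directly on a Categorical (the values here are made up).
>>> cat = pd.Categorical(["a", "b", "a"])
>>> np.asarray(cat)
array(['a', 'b', 'a'], dtype=object)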
| reference/api/pandas.Categorical.__array__.html |
pandas.core.window.rolling.Rolling.skew | `pandas.core.window.rolling.Rolling.skew`
Calculate the rolling unbiased skewness.
Include only float, int, boolean columns. | Rolling.skew(numeric_only=False, **kwargs)[source]#
Calculate the rolling unbiased skewness.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
scipy.stats.skewThird moment of a probability density.
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.skewAggregating skew for Series.
pandas.DataFrame.skewAggregating skew for DataFrame.
Notes
A minimum of three periods is required for the rolling calculation.
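A minimal sketch (data are illustrative); the first two entries of the result are NaN because each window needs at least three observations:
import pandas as pd

s = pd.Series([1.0, 2.0, 4.0, 8.0, 16.0])

# Unbiased skewness over a rolling window of three observations; the first
# two positions are NaN because the window is not yet full.
s.rolling(window=3).skew()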
| reference/api/pandas.core.window.rolling.Rolling.skew.html |
pandas.notnull | `pandas.notnull`
Detect non-missing values for an array-like object.
```
>>> pd.notna('dog')
True
``` | pandas.notnull(obj)[source]#
Detect non-missing values for an array-like object.
This function takes a scalar or array-like object and indicates
whether values are valid (not missing, which is NaN in numeric
arrays, None or NaN in object arrays, NaT in datetimelike).
Parameters
objarray-like or object valueObject to check for not null or non-missing values.
Returns
bool or array-like of boolFor scalar input, returns a scalar boolean.
For array input, returns an array of boolean indicating whether each
corresponding element is valid.
See also
isnaBoolean inverse of pandas.notna.
Series.notnaDetect valid values in a Series.
DataFrame.notnaDetect valid values in a DataFrame.
Index.notnaDetect valid values in an Index.
Examples
Scalar arguments (including strings) result in a scalar boolean.
>>> pd.notna('dog')
True
>>> pd.notna(pd.NA)
False
>>> pd.notna(np.nan)
False
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.notna(array)
array([[ True, False, True],
[ True, True, False]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.notna(index)
array([ True, True, False, True])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.notna(df)
0 1 2
0 True True True
1 True False True
>>> pd.notna(df[1])
0 True
1 False
Name: 1, dtype: bool
| reference/api/pandas.notnull.html |
pandas.DataFrame.dot | `pandas.DataFrame.dot`
Compute the matrix multiplication between the DataFrame and other.
```
>>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]])
>>> s = pd.Series([1, 1, 2, 1])
>>> df.dot(s)
0 -4
1 5
dtype: int64
``` | DataFrame.dot(other)[source]#
Compute the matrix multiplication between the DataFrame and other.
This method computes the matrix product between the DataFrame and the
values of an other Series, DataFrame or a numpy array.
It can also be called using self @ other in Python >= 3.5.
Parameters
otherSeries, DataFrame or array-likeThe other object to compute the matrix product with.
Returns
Series or DataFrameIf other is a Series, return the matrix product between self and
other as a Series. If other is a DataFrame or a numpy.array, return
the matrix product of self and other in a DataFrame or a np.array.
See also
Series.dotSimilar method for Series.
Notes
The dimensions of DataFrame and other must be compatible in order to
compute the matrix multiplication. In addition, the column names of
DataFrame and the index of other must contain the same values, as they
will be aligned prior to the multiplication.
The dot method for Series computes the inner product, instead of the
matrix product here.
Examples
Here we multiply a DataFrame with a Series.
>>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]])
>>> s = pd.Series([1, 1, 2, 1])
>>> df.dot(s)
0 -4
1 5
dtype: int64
Here we multiply a DataFrame with another DataFrame.
>>> other = pd.DataFrame([[0, 1], [1, 2], [-1, -1], [2, 0]])
>>> df.dot(other)
0 1
0 1 4
1 2 2
Note that the dot method gives the same result as @
>>> df @ other
0 1
0 1 4
1 2 2
The dot method works also if other is an np.array.
>>> arr = np.array([[0, 1], [1, 2], [-1, -1], [2, 0]])
>>> df.dot(arr)
0 1
0 1 4
1 2 2
Note how shuffling of the objects does not change the result.
>>> s2 = s.reindex([1, 0, 2, 3])
>>> df.dot(s2)
0 -4
1 5
dtype: int64
| reference/api/pandas.DataFrame.dot.html |
pandas.io.formats.style.Styler.format_index | `pandas.io.formats.style.Styler.format_index`
Format the text display value of index labels or column headers.
```
>>> df = pd.DataFrame([[1, 2, 3]], columns=[2.0, np.nan, 4.0])
>>> df.style.format_index(axis=1, na_rep='MISS', precision=3)
2.000 MISS 4.000
0 1 2 3
``` | Styler.format_index(formatter=None, axis=0, level=None, na_rep=None, precision=None, decimal='.', thousands=None, escape=None, hyperlinks=None)[source]#
Format the text display value of index labels or column headers.
New in version 1.4.0.
Parameters
formatterstr, callable, dict or NoneObject to define how values are displayed. See notes.
axis{0, “index”, 1, “columns”}Whether to apply the formatter to the index or column headers.
levelint, str, listThe level(s) over which to apply the generic formatter.
na_repstr, optionalRepresentation for missing values.
If na_rep is None, no special formatting is applied.
precisionint, optionalFloating point precision to use for display purposes, if not determined by
the specified formatter.
decimalstr, default “.”Character used as decimal separator for floats, complex and integers.
thousandsstr, optional, default NoneCharacter used as thousands separator for floats, complex and integers.
escapestr, optionalUse ‘html’ to replace the characters &, <, >, ', and "
in cell display string with HTML-safe sequences.
Use ‘latex’ to replace the characters &, %, $, #, _,
{, }, ~, ^, and \ in the cell display string with
LaTeX-safe sequences.
Escaping is done before formatter.
hyperlinks{“html”, “latex”}, optionalConvert string patterns containing https://, http://, ftp:// or www. to
HTML <a> tags as clickable URL hyperlinks if “html”, or LaTeX href
commands if “latex”.
Returns
selfStyler
See also
Styler.formatFormat the text display value of data cells.
Notes
This method assigns a formatting function, formatter, to each level label
in the DataFrame’s index or column headers. If formatter is None,
then the default formatter is used.
If a callable then that function should take a label value as input and return
a displayable representation, such as a string. If formatter is
given as a string this is assumed to be a valid Python format specification
and is wrapped to a callable as string.format(x). If a dict is given,
keys should correspond to MultiIndex level numbers or names, and values should
be string or callable, as above.
The default formatter currently expresses floats and complex numbers with the
pandas display precision unless using the precision argument here. The
default formatter does not adjust the representation of missing values unless
the na_rep argument is used.
The level argument defines which levels of a MultiIndex to apply the
method to. If the formatter argument is given in dict form but does
not include all levels within the level argument then these unspecified levels
will have the default formatter applied. Any levels in the formatter dict
specifically excluded from the level argument will be ignored.
When using a formatter string the dtypes must be compatible, otherwise a
ValueError will be raised.
Warning
Styler.format_index is ignored when using the output format
Styler.to_excel, since Excel and Python have inherently different
formatting structures.
However, it is possible to use the number-format pseudo CSS attribute
to force Excel permissible formatting. See documentation for Styler.format.
Examples
Using na_rep and precision with the default formatter
>>> df = pd.DataFrame([[1, 2, 3]], columns=[2.0, np.nan, 4.0])
>>> df.style.format_index(axis=1, na_rep='MISS', precision=3)
2.000 MISS 4.000
0 1 2 3
Using a formatter specification on consistent dtypes in a level
>>> df.style.format_index('{:.2f}', axis=1, na_rep='MISS')
2.00 MISS 4.00
0 1 2 3
Using the default formatter for unspecified levels
>>> df = pd.DataFrame([[1, 2, 3]],
... columns=pd.MultiIndex.from_arrays([["a", "a", "b"],[2, np.nan, 4]]))
>>> df.style.format_index({0: lambda v: upper(v)}, axis=1, precision=1)
...
A B
2.0 nan 4.0
0 1 2 3
Using a callable formatter function.
>>> func = lambda s: 'STRING' if isinstance(s, str) else 'FLOAT'
>>> df.style.format_index(func, axis=1, na_rep='MISS')
...
STRING STRING
FLOAT MISS FLOAT
0 1 2 3
Using a formatter with HTML escape and na_rep.
>>> df = pd.DataFrame([[1, 2, 3]], columns=['"A"', 'A&B', None])
>>> s = df.style.format_index('$ {0}', axis=1, escape="html", na_rep="NA")
...
<th .. >$ "A"</th>
<th .. >$ A&B</th>
<th .. >NA</td>
...
Using a formatter with LaTeX escape.
>>> df = pd.DataFrame([[1, 2, 3]], columns=["123", "~", "$%#"])
>>> df.style.format_index("\\textbf{{{}}}", escape="latex", axis=1).to_latex()
...
\begin{tabular}{lrrr}
{} & {\textbf{123}} & {\textbf{\textasciitilde }} & {\textbf{\$\%\#}} \\
0 & 1 & 2 & 3 \\
\end{tabular}
| reference/api/pandas.io.formats.style.Styler.format_index.html |
pandas.tseries.offsets.Milli.apply_index | Milli.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
| reference/api/pandas.tseries.offsets.Milli.apply_index.html |
pandas.Series.skew | `pandas.Series.skew`
Return unbiased skew over requested axis.
Normalized by N-1. | Series.skew(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return unbiased skew over requested axis.
Normalized by N-1.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
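A minimal sketch (the data are illustrative):
import pandas as pd

s = pd.Series([1, 2, 3, 10])

# Returns a single float: the unbiased (N-1 normalized) sample skewness.
s.skew()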
| reference/api/pandas.Series.skew.html |
pandas.Period.day_of_year | `pandas.Period.day_of_year`
Return the day of the year.
This attribute returns the day of the year on which the particular
date occurs. The return value ranges between 1 to 365 for regular
years and 1 to 366 for leap years.
```
>>> period = pd.Period("2015-10-23", freq='H')
>>> period.day_of_year
296
>>> period = pd.Period("2012-12-31", freq='D')
>>> period.day_of_year
366
>>> period = pd.Period("2013-01-01", freq='D')
>>> period.day_of_year
1
``` | Period.day_of_year#
Return the day of the year.
This attribute returns the day of the year on which the particular
date occurs. The return value ranges between 1 to 365 for regular
years and 1 to 366 for leap years.
Returns
intThe day of year.
See also
Period.dayReturn the day of the month.
Period.day_of_weekReturn the day of week.
PeriodIndex.day_of_yearReturn the day of year of all indexes.
Examples
>>> period = pd.Period("2015-10-23", freq='H')
>>> period.day_of_year
296
>>> period = pd.Period("2012-12-31", freq='D')
>>> period.day_of_year
366
>>> period = pd.Period("2013-01-01", freq='D')
>>> period.day_of_year
1
| reference/api/pandas.Period.day_of_year.html |
pandas.DatetimeIndex.month | `pandas.DatetimeIndex.month`
The month as January=1, December=12.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="M")
... )
>>> datetime_series
0 2000-01-31
1 2000-02-29
2 2000-03-31
dtype: datetime64[ns]
>>> datetime_series.dt.month
0 1
1 2
2 3
dtype: int64
``` | property DatetimeIndex.month[source]#
The month as January=1, December=12.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="M")
... )
>>> datetime_series
0 2000-01-31
1 2000-02-29
2 2000-03-31
dtype: datetime64[ns]
>>> datetime_series.dt.month
0 1
1 2
2 3
dtype: int64
| reference/api/pandas.DatetimeIndex.month.html |
pandas.Index.set_value | `pandas.Index.set_value`
Fast lookup of value from 1-dimensional ndarray.
Deprecated since version 1.0. | final Index.set_value(arr, key, value)[source]#
Fast lookup of value from 1-dimensional ndarray.
Deprecated since version 1.0.
Notes
Only use this if you know what you’re doing.
| reference/api/pandas.Index.set_value.html |
pandas.ExcelWriter.cur_sheet | `pandas.ExcelWriter.cur_sheet`
Current sheet for writing.
Deprecated since version 1.5.0. | property ExcelWriter.cur_sheet[source]#
Current sheet for writing.
Deprecated since version 1.5.0.
| reference/api/pandas.ExcelWriter.cur_sheet.html |
pandas.api.extensions.register_dataframe_accessor | `pandas.api.extensions.register_dataframe_accessor`
Register a custom accessor on DataFrame objects.
```
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
``` | pandas.api.extensions.register_dataframe_accessor(name)[source]#
Register a custom accessor on DataFrame objects.
Parameters
namestrName under which the accessor should be registered. A warning is issued
if this name conflicts with a preexisting attribute.
Returns
callableA class decorator.
See also
register_dataframe_accessorRegister a custom accessor on DataFrame objects.
register_series_accessorRegister a custom accessor on Series objects.
register_index_accessorRegister a custom accessor on Index objects.
Notes
When accessed, your accessor will be initialized with the pandas object
the user is interacting with. So the signature must be
def __init__(self, pandas_object): # noqa: E999
...
For consistency with pandas methods, you should raise an AttributeError
if the data passed to your accessor has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples
In your library code:
import pandas as pd
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
def __init__(self, pandas_obj):
self._obj = pandas_obj
@property
def center(self):
# return the geographic center point of this DataFrame
lat = self._obj.latitude
lon = self._obj.longitude
return (float(lon.mean()), float(lat.mean()))
def plot(self):
# plot this array's data on a map, e.g., using Cartopy
pass
Back in an interactive IPython session:
In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10),
...: "latitude": np.linspace(0, 20)})
In [2]: ds.geo.center
Out[2]: (5.0, 10.0)
In [3]: ds.geo.plot() # plots data on a map
| reference/api/pandas.api.extensions.register_dataframe_accessor.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.is_month_start | `pandas.tseries.offsets.CustomBusinessMonthEnd.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | CustomBusinessMonthEnd.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_month_start.html |
pandas.Timestamp.isoformat | `pandas.Timestamp.isoformat`
Return the time formatted according to ISO 8601.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
>>> ts.isoformat()
'2020-03-14T15:32:52.192548651'
>>> ts.isoformat(timespec='microseconds')
'2020-03-14T15:32:52.192548'
``` | Timestamp.isoformat()#
Return the time formatted according to ISO 8601.
The full format looks like ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn’.
By default, the fractional part is omitted if self.microsecond == 0
and self.nanosecond == 0.
If self.tzinfo is not None, the UTC offset is also attached, giving
a full format of ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn+HH:MM’.
Parameters
sepstr, default ‘T’String used as the separator between the date and time.
timespecstr, default ‘auto’Specifies the number of additional terms of the time to include.
The valid values are ‘auto’, ‘hours’, ‘minutes’, ‘seconds’,
‘milliseconds’, ‘microseconds’, and ‘nanoseconds’.
Returns
str
Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
>>> ts.isoformat()
'2020-03-14T15:32:52.192548651'
>>> ts.isoformat(timespec='microseconds')
'2020-03-14T15:32:52.192548'
| reference/api/pandas.Timestamp.isoformat.html |
pandas.errors.NumbaUtilError | `pandas.errors.NumbaUtilError`
Error raised for unsupported Numba engine routines. | exception pandas.errors.NumbaUtilError[source]#
Error raised for unsupported Numba engine routines.
| reference/api/pandas.errors.NumbaUtilError.html |
pandas.tseries.offsets.BYearBegin.is_month_start | `pandas.tseries.offsets.BYearBegin.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | BYearBegin.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.BYearBegin.is_month_start.html |
pandas.tseries.offsets.CustomBusinessHour.start | pandas.tseries.offsets.CustomBusinessHour.start | CustomBusinessHour.start#
| reference/api/pandas.tseries.offsets.CustomBusinessHour.start.html |
pandas.io.formats.style.Styler.highlight_max | `pandas.io.formats.style.Styler.highlight_max`
Highlight the maximum with a style. | Styler.highlight_max(subset=None, color='yellow', axis=0, props=None)[source]#
Highlight the maximum with a style.
Parameters
subsetlabel, array-like, IndexSlice, optionalA valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data to before applying the function.
colorstr, default ‘yellow’Background color to use for highlighting.
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Apply to each column (axis=0 or 'index'), to each row
(axis=1 or 'columns'), or to the entire DataFrame at once
with axis=None.
propsstr, default NoneCSS properties to use for highlighting. If props is given, color
is not used.
New in version 1.3.0.
Returns
selfStyler
See also
Styler.highlight_nullHighlight missing values with a style.
Styler.highlight_minHighlight the minimum with a style.
Styler.highlight_betweenHighlight a defined range with a style.
Styler.highlight_quantileHighlight values defined by a quantile with a style.
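No example is shown on this page, so the following is a minimal, hedged sketch of typical usage; the DataFrame contents and the lightgreen color are invented for illustration, and the Styler requires the optional jinja2 dependency.
```
import pandas as pd

df = pd.DataFrame({"a": [1, 5, 3], "b": [9, 2, 4]})

# Highlight the maximum of each column (axis=0 is the default).
styled = df.style.highlight_max(color="lightgreen")

# Highlight the single overall maximum using CSS properties instead of a color.
styled_all = df.style.highlight_max(axis=None, props="font-weight: bold;")

# In a notebook the Styler renders as HTML; elsewhere it can be exported.
html = styled.to_html()
```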
| reference/api/pandas.io.formats.style.Styler.highlight_max.html |
pandas.read_stata | `pandas.read_stata`
Read Stata file into DataFrame.
```
>>> df = pd.read_stata('animals.dta')
``` | pandas.read_stata(filepath_or_buffer, *, convert_dates=True, convert_categoricals=True, index_col=None, convert_missing=False, preserve_dtypes=True, columns=None, order_categoricals=True, chunksize=None, iterator=False, compression='infer', storage_options=None)[source]#
Read Stata file into DataFrame.
Parameters
filepath_or_bufferstr, path object or file-like objectAny valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.dta.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method,
such as a file handle (e.g. via builtin open function)
or StringIO.
convert_datesbool, default TrueConvert date variables to DataFrame time values.
convert_categoricalsbool, default TrueRead value labels and convert columns to Categorical/Factor variables.
index_colstr, optionalColumn to set as index.
convert_missingbool, default FalseFlag indicating whether to convert missing values to their Stata
representations. If False, missing values are replaced with nan.
If True, columns containing missing values are returned with
object data types and missing values are represented by
StataMissingValue objects.
preserve_dtypesbool, default TruePreserve Stata datatypes. If False, numeric data are upcast to pandas
default types for foreign data (float64 or int64).
columnslist or NoneColumns to retain. Columns will be returned in the given order. None
returns all columns.
order_categoricalsbool, default TrueFlag indicating whether converted categorical data are ordered.
chunksizeint, default NoneReturn StataReader object for iterations, returns chunks with
given number of lines.
iteratorbool, default FalseReturn StataReader object.
compressionstr or dict, default ‘infer’For on-the-fly decompression of on-disk data. If ‘infer’ and ‘filepath_or_buffer’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
Returns
DataFrame or StataReader
See also
io.stata.StataReaderLow-level reader for Stata data files.
DataFrame.to_stataExport Stata data files.
Notes
Categorical variables read through an iterator may not have the same
categories and dtype. This occurs when a variable stored in a DTA
file is associated to an incomplete set of value labels that only
label a strict subset of the values.
Examples
Creating a dummy stata for this example
>>> df = pd.DataFrame({'animal': ['falcon', 'parrot', 'falcon',
...                               'parrot'],
...                    'speed': [350, 18, 361, 15]})  # doctest: +SKIP
>>> df.to_stata('animals.dta')  # doctest: +SKIP
Read a Stata dta file:
>>> df = pd.read_stata('animals.dta')
Read a Stata dta file in 10,000 line chunks:
>>> values = np.random.randint(0, 10, size=(20_000, 1), dtype="uint8")  # doctest: +SKIP
>>> df = pd.DataFrame(values, columns=["i"])  # doctest: +SKIP
>>> df.to_stata('filename.dta')  # doctest: +SKIP
>>> itr = pd.read_stata('filename.dta', chunksize=10000)
>>> for chunk in itr:
... # Operate on a single chunk, e.g., chunk.mean()
... pass
| reference/api/pandas.read_stata.html |
pandas.Period.week | `pandas.Period.week`
Get the week of the year on the given Period.
```
>>> p = pd.Period("2018-03-11", "H")
>>> p.week
10
``` | Period.week#
Get the week of the year on the given Period.
Returns
int
See also
Period.dayofweekGet the day component of the Period.
Period.weekdayGet the day component of the Period.
Examples
>>> p = pd.Period("2018-03-11", "H")
>>> p.week
10
>>> p = pd.Period("2018-02-01", "D")
>>> p.week
5
>>> p = pd.Period("2018-01-06", "D")
>>> p.week
1
| reference/api/pandas.Period.week.html |
pandas.Series.str.istitle | `pandas.Series.str.istitle`
Check whether all characters in each string are titlecase.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
``` | Series.str.istitle()[source]#
Check whether all characters in each string are titlecase.
This is equivalent to running the Python string method
str.istitle() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
| reference/api/pandas.Series.str.istitle.html |
pandas.core.resample.Resampler.apply | `pandas.core.resample.Resampler.apply`
Aggregate using one or more operations over the specified axis.
```
>>> s = pd.Series([1, 2, 3, 4, 5],
... index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00 1
2013-01-01 00:00:01 2
2013-01-01 00:00:02 3
2013-01-01 00:00:03 4
2013-01-01 00:00:04 5
Freq: S, dtype: int64
``` | Resampler.apply(func=None, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either
work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
scalar, Series or DataFrameThe return can be:
scalar : when Series.agg is called with single function
Series : when DataFrame.agg is called with a single function
DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
DataFrame.groupby.aggregateAggregate using callable, string, dict, or list of string/callables.
DataFrame.resample.transformTransforms the Series on each group based on the given function.
DataFrame.aggregateAggregate using one or more operations over the specified axis.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],
... index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00 1
2013-01-01 00:00:01 2
2013-01-01 00:00:02 3
2013-01-01 00:00:03 4
2013-01-01 00:00:04 5
Freq: S, dtype: int64
>>> r = s.resample('2s')
>>> r.agg(np.sum)
2013-01-01 00:00:00 3
2013-01-01 00:00:02 7
2013-01-01 00:00:04 5
Freq: 2S, dtype: int64
>>> r.agg(['sum', 'mean', 'max'])
sum mean max
2013-01-01 00:00:00 3 1.5 2
2013-01-01 00:00:02 7 3.5 4
2013-01-01 00:00:04 5 5.0 5
>>> r.agg({'result': lambda x: x.mean() / x.std(),
... 'total': np.sum})
result total
2013-01-01 00:00:00 2.121320 3
2013-01-01 00:00:02 4.949747 7
2013-01-01 00:00:04 NaN 5
>>> r.agg(average="mean", total="sum")
average total
2013-01-01 00:00:00 1.5 3
2013-01-01 00:00:02 3.5 7
2013-01-01 00:00:04 5.0 5
| reference/api/pandas.core.resample.Resampler.apply.html |
pandas.tseries.offsets.BYearBegin.nanos | pandas.tseries.offsets.BYearBegin.nanos | BYearBegin.nanos#
| reference/api/pandas.tseries.offsets.BYearBegin.nanos.html |
pandas.api.types.is_interval | pandas.api.types.is_interval | pandas.api.types.is_interval()#
| reference/api/pandas.api.types.is_interval.html |
pandas.Series.cat.as_unordered | `pandas.Series.cat.as_unordered`
Set the Categorical to be unordered. | Series.cat.as_unordered(*args, **kwargs)[source]#
Set the Categorical to be unordered.
Parameters
inplacebool, default FalseWhether or not to set the ordered attribute in-place or return
a copy of this categorical with ordered set to False.
Deprecated since version 1.5.0.
Returns
Categorical or NoneUnordered Categorical or None if inplace=True.
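As a small, hedged sketch (the category values are invented), as_unordered returns a copy whose categories no longer carry an ordering:
```
import pandas as pd

s = pd.Series(["a", "b", "c"],
              dtype=pd.CategoricalDtype(["a", "b", "c"], ordered=True))
print(s.cat.ordered)          # True

# Return a copy with the ordered flag cleared (the inplace keyword is deprecated).
unordered = s.cat.as_unordered()
print(unordered.cat.ordered)  # False
```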
| reference/api/pandas.Series.cat.as_unordered.html |
GroupBy | GroupBy | GroupBy objects are returned by groupby calls: pandas.DataFrame.groupby(), pandas.Series.groupby(), etc.
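As a brief, hedged illustration (the column names are invented), a GroupBy object can be obtained, iterated, and aggregated like this:
```
import pandas as pd

df = pd.DataFrame({"team": ["x", "x", "y"], "points": [1, 2, 3]})

# DataFrame.groupby returns a DataFrameGroupBy object.
gb = df.groupby("team")

# Iterate over (group name, sub-DataFrame) pairs.
for name, group in gb:
    print(name, len(group))

# Aggregate each group's points.
print(gb["points"].sum())
```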
Indexing, iteration#
GroupBy.__iter__()
Groupby iterator.
GroupBy.groups
Dict {group name -> group labels}.
GroupBy.indices
Dict {group name -> group indices}.
GroupBy.get_group(name[, obj])
Construct DataFrame from group with provided name.
Grouper(*args, **kwargs)
A Grouper allows the user to specify a groupby instruction for an object.
Function application#
GroupBy.apply(func, *args, **kwargs)
Apply function func group-wise and combine the results together.
GroupBy.agg(func, *args, **kwargs)
SeriesGroupBy.aggregate([func, engine, ...])
Aggregate using one or more operations over the specified axis.
DataFrameGroupBy.aggregate([func, engine, ...])
Aggregate using one or more operations over the specified axis.
SeriesGroupBy.transform(func, *args[, ...])
Call function producing a same-indexed Series on each group.
DataFrameGroupBy.transform(func, *args[, ...])
Call function producing a same-indexed DataFrame on each group.
GroupBy.pipe(func, *args, **kwargs)
Apply a func with arguments to this GroupBy object and return its result.
Computations / descriptive stats#
GroupBy.all([skipna])
Return True if all values in the group are truthful, else False.
GroupBy.any([skipna])
Return True if any value in the group is truthful, else False.
GroupBy.bfill([limit])
Backward fill the values.
GroupBy.backfill([limit])
(DEPRECATED) Backward fill the values.
GroupBy.count()
Compute count of group, excluding missing values.
GroupBy.cumcount([ascending])
Number each item in each group from 0 to the length of that group - 1.
GroupBy.cummax([axis, numeric_only])
Cumulative max for each group.
GroupBy.cummin([axis, numeric_only])
Cumulative min for each group.
GroupBy.cumprod([axis])
Cumulative product for each group.
GroupBy.cumsum([axis])
Cumulative sum for each group.
GroupBy.ffill([limit])
Forward fill the values.
GroupBy.first([numeric_only, min_count])
Compute the first non-null entry of each column.
GroupBy.head([n])
Return first n rows of each group.
GroupBy.last([numeric_only, min_count])
Compute the last non-null entry of each column.
GroupBy.max([numeric_only, min_count, ...])
Compute max of group values.
GroupBy.mean([numeric_only, engine, ...])
Compute mean of groups, excluding missing values.
GroupBy.median([numeric_only])
Compute median of groups, excluding missing values.
GroupBy.min([numeric_only, min_count, ...])
Compute min of group values.
GroupBy.ngroup([ascending])
Number each group from 0 to the number of groups - 1.
GroupBy.nth
Take the nth row from each group if n is an int, otherwise a subset of rows.
GroupBy.ohlc()
Compute open, high, low and close values of a group, excluding missing values.
GroupBy.pad([limit])
(DEPRECATED) Forward fill the values.
GroupBy.prod([numeric_only, min_count])
Compute prod of group values.
GroupBy.rank([method, ascending, na_option, ...])
Provide the rank of values within each group.
GroupBy.pct_change([periods, fill_method, ...])
Calculate pct_change of each value to previous entry in group.
GroupBy.size()
Compute group sizes.
GroupBy.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
GroupBy.std([ddof, engine, engine_kwargs, ...])
Compute standard deviation of groups, excluding missing values.
GroupBy.sum([numeric_only, min_count, ...])
Compute sum of group values.
GroupBy.var([ddof, engine, engine_kwargs, ...])
Compute variance of groups, excluding missing values.
GroupBy.tail([n])
Return last n rows of each group.
The following methods are available in both SeriesGroupBy and
DataFrameGroupBy objects, but may differ slightly, usually in that
the DataFrameGroupBy version permits the specification of an
axis argument, and often an argument indicating whether to restrict
application to columns of a specific data type.
DataFrameGroupBy.all([skipna])
Return True if all values in the group are truthful, else False.
DataFrameGroupBy.any([skipna])
Return True if any value in the group is truthful, else False.
DataFrameGroupBy.backfill([limit])
(DEPRECATED) Backward fill the values.
DataFrameGroupBy.bfill([limit])
Backward fill the values.
DataFrameGroupBy.corr
Compute pairwise correlation of columns, excluding NA/null values.
DataFrameGroupBy.count()
Compute count of group, excluding missing values.
DataFrameGroupBy.cov
Compute pairwise covariance of columns, excluding NA/null values.
DataFrameGroupBy.cumcount([ascending])
Number each item in each group from 0 to the length of that group - 1.
DataFrameGroupBy.cummax([axis, numeric_only])
Cumulative max for each group.
DataFrameGroupBy.cummin([axis, numeric_only])
Cumulative min for each group.
DataFrameGroupBy.cumprod([axis])
Cumulative product for each group.
DataFrameGroupBy.cumsum([axis])
Cumulative sum for each group.
DataFrameGroupBy.describe(**kwargs)
Generate descriptive statistics.
DataFrameGroupBy.diff([periods, axis])
First discrete difference of element.
DataFrameGroupBy.ffill([limit])
Forward fill the values.
DataFrameGroupBy.fillna
Fill NA/NaN values using the specified method.
DataFrameGroupBy.filter(func[, dropna])
Return a copy of a DataFrame excluding filtered elements.
DataFrameGroupBy.hist
Make a histogram of the DataFrame's columns.
DataFrameGroupBy.idxmax([axis, skipna, ...])
Return index of first occurrence of maximum over requested axis.
DataFrameGroupBy.idxmin([axis, skipna, ...])
Return index of first occurrence of minimum over requested axis.
DataFrameGroupBy.mad
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
DataFrameGroupBy.nunique([dropna])
Return DataFrame with counts of unique elements in each position.
DataFrameGroupBy.pad([limit])
(DEPRECATED) Forward fill the values.
DataFrameGroupBy.pct_change([periods, ...])
Calculate pct_change of each value to previous entry in group.
DataFrameGroupBy.plot
Class implementing the .plot attribute for groupby objects.
DataFrameGroupBy.quantile([q, ...])
Return group values at the given quantile, a la numpy.percentile.
DataFrameGroupBy.rank([method, ascending, ...])
Provide the rank of values within each group.
DataFrameGroupBy.resample(rule, *args, **kwargs)
Provide resampling when using a TimeGrouper.
DataFrameGroupBy.sample([n, frac, replace, ...])
Return a random sample of items from each group.
DataFrameGroupBy.shift([periods, freq, ...])
Shift each group by periods observations.
DataFrameGroupBy.size()
Compute group sizes.
DataFrameGroupBy.skew
Return unbiased skew over requested axis.
DataFrameGroupBy.take
Return the elements in the given positional indices along an axis.
DataFrameGroupBy.tshift
(DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrameGroupBy.value_counts([subset, ...])
Return a Series or DataFrame containing counts of unique rows.
The following methods are available only for SeriesGroupBy objects.
SeriesGroupBy.hist
Draw histogram of the input series using matplotlib.
SeriesGroupBy.nlargest([n, keep])
Return the largest n elements.
SeriesGroupBy.nsmallest([n, keep])
Return the smallest n elements.
SeriesGroupBy.unique
Return unique values of Series object.
SeriesGroupBy.is_monotonic_increasing
Return boolean if values in the object are monotonically increasing.
SeriesGroupBy.is_monotonic_decreasing
Return boolean if values in the object are monotonically decreasing.
The following methods are available only for DataFrameGroupBy objects.
DataFrameGroupBy.corrwith
Compute pairwise correlation.
DataFrameGroupBy.boxplot([subplots, column, ...])
Make box plots from DataFrameGroupBy data.
| reference/groupby.html |
pandas.read_sql_query | `pandas.read_sql_query`
Read SQL query into a DataFrame. | pandas.read_sql_query(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, chunksize=None, dtype=None)[source]#
Read SQL query into a DataFrame.
Returns a DataFrame corresponding to the result set of the query
string. Optionally provide an index_col parameter to use one of the
columns as the index, otherwise default integer index will be used.
Parameters
sqlstr SQL query or SQLAlchemy Selectable (select or text object)SQL query to be executed.
conSQLAlchemy connectable, str, or sqlite3 connectionUsing SQLAlchemy makes it possible to use any DB supported by that
library. If a DBAPI2 object, only sqlite3 is supported.
index_colstr or list of str, optional, default: NoneColumn(s) to set as index(MultiIndex).
coerce_floatbool, default TrueAttempts to convert values of non-string, non-numeric objects (like
decimal.Decimal) to floating point. Useful for SQL result sets.
paramslist, tuple or dict, optional, default: NoneList of parameters to pass to execute method. The syntax used
to pass parameters is database driver dependent. Check your
database driver documentation for which of the five syntax styles,
described in PEP 249’s paramstyle, is supported.
E.g. for psycopg2, uses %(name)s so use params={'name': 'value'}.
parse_dateslist or dict, default: None
List of column names to parse as dates.
Dict of {column_name: format string} where format string is
strftime compatible in case of parsing string times, or is one of
(D, s, ns, ms, us) in case of parsing integer timestamps.
Dict of {column_name: arg dict}, where the arg dict corresponds
to the keyword arguments of pandas.to_datetime()
Especially useful with databases without native Datetime support,
such as SQLite.
chunksizeint, default NoneIf specified, return an iterator where chunksize is the number of
rows to include in each chunk.
dtypeType name or dict of columnsData type for data or columns. E.g. np.float64 or
{‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’}.
New in version 1.3.0.
Returns
DataFrame or Iterator[DataFrame]
See also
read_sql_tableRead SQL database table into a DataFrame.
read_sqlRead SQL query or database table into a DataFrame.
Notes
Any datetime values with time zone information parsed via the parse_dates
parameter will be converted to UTC.
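No example is shown on this page; the following self-contained sketch uses the standard-library sqlite3 driver with a throwaway in-memory table (the table name, columns, and data are invented for illustration).
```
import sqlite3

import pandas as pd

# Build a throwaway in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (station TEXT, value REAL)")
con.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [("BETR801", 50.5), ("FR04014", 25.0)],
)

# Read the result of a parametrised query into a DataFrame.
df = pd.read_sql_query(
    "SELECT station, value FROM measurements WHERE value > ?",
    con,
    params=(20.0,),
)
print(df)
con.close()
```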
| reference/api/pandas.read_sql_query.html |
pandas.core.groupby.DataFrameGroupBy.cumprod | `pandas.core.groupby.DataFrameGroupBy.cumprod`
Cumulative product for each group. | DataFrameGroupBy.cumprod(axis=0, *args, **kwargs)[source]#
Cumulative product for each group.
Returns
Series or DataFrame
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
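A short, hedged sketch (data invented): the cumulative product is computed within each group and aligned with the original index.
```
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [2, 3, 4, 5]})

# Cumulative product per group; the grouping column is excluded from the result.
print(df.groupby("key").cumprod())
#    val
# 0    2
# 1    6
# 2    4
# 3   20
```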
| reference/api/pandas.core.groupby.DataFrameGroupBy.cumprod.html |
How to create new columns derived from existing columns? | How to create new columns derived from existing columns?
For this tutorial, air quality data about \(NO_2\) is used, made
available by OpenAQ and using the
py-openaq package.
The air_quality_no2.csv data set provides \(NO_2\) values for
the measurement stations FR04014, BETR801 and London Westminster
in respectively Paris, Antwerp and London.
I want to express the \(NO_2\) concentration of the station in London in mg/m\(^3\).
(If we assume temperature of 25 degrees Celsius and pressure of 1013
hPa, the conversion factor is 1.882)
To create a new column, use the [] brackets with the new column name
at the left side of the assignment. |
Air quality data
For this tutorial, air quality data about \(NO_2\) is used, made
available by OpenAQ and using the
py-openaq package.
The air_quality_no2.csv data set provides \(NO_2\) values for
the measurement stations FR04014, BETR801 and London Westminster
in respectively Paris, Antwerp and London.
In [2]: air_quality = pd.read_csv("data/air_quality_no2.csv", index_col=0, parse_dates=True)
In [3]: air_quality.head()
Out[3]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN
How to create new columns derived from existing columns?#
I want to express the \(NO_2\) concentration of the station in London in mg/m\(^3\).
(If we assume temperature of 25 degrees Celsius and pressure of 1013
hPa, the conversion factor is 1.882)
In [4]: air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882
In [5]: air_quality.head()
Out[5]:
station_antwerp ... london_mg_per_cubic
datetime ...
2019-05-07 02:00:00 NaN ... 43.286
2019-05-07 03:00:00 50.5 ... 35.758
2019-05-07 04:00:00 45.0 ... 35.758
2019-05-07 05:00:00 NaN ... 30.112
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 4 columns]
To create a new column, use the [] brackets with the new column name
at the left side of the assignment.
Note
The calculation of the values is done element-wise. This
means all values in the given column are multiplied by the value 1.882
at once. You do not need to use a loop to iterate each of the rows!
I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column.
In [6]: air_quality["ratio_paris_antwerp"] = (
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...: )
...:
In [7]: air_quality.head()
Out[7]:
station_antwerp ... ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN ... NaN
2019-05-07 03:00:00 50.5 ... 0.495050
2019-05-07 04:00:00 45.0 ... 0.615556
2019-05-07 05:00:00 NaN ... NaN
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 5 columns]
The calculation is again element-wise, so the / is applied for the
values in each row.
Also other mathematical operators (+, -, *, /,…) or
logical operators (<, >, ==,…) work element-wise. The latter was already
used in the subset data tutorial to filter
rows of a table using a conditional expression.
If you need more advanced logic, you can use arbitrary Python code via apply().
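As a small sketch that is not part of the original tutorial (the helper function and its name are invented), apply() can evaluate arbitrary row-wise logic on the air_quality table used above:
```
# Label each hour by whichever station reports the higher NO2 value.
def busier_station(row):
    if row["station_paris"] > row["station_antwerp"]:
        return "paris"
    # Rows with missing values compare as False and fall through here.
    return "antwerp"

# axis=1 passes one row at a time to the function.
busier = air_quality.apply(busier_station, axis=1)
print(busier.head())
```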
I want to rename the data columns to the corresponding station identifiers used by OpenAQ.
In [8]: air_quality_renamed = air_quality.rename(
...: columns={
...: "station_antwerp": "BETR801",
...: "station_paris": "FR04014",
...: "station_london": "London Westminster",
...: }
...: )
...:
In [9]: air_quality_renamed.head()
Out[9]:
BETR801 FR04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
The rename() function can be used for both row labels and column
labels. Provide a dictionary with the keys the current names and the
values the new names to update the corresponding names.
The mapping should not be restricted to fixed names only, but can be a
mapping function as well. For example, converting the column names to
lowercase letters can be done using a function as well:
In [10]: air_quality_renamed = air_quality_renamed.rename(columns=str.lower)
In [11]: air_quality_renamed.head()
Out[11]:
betr801 fr04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
Details about column or row label renaming are provided in the user guide section on renaming labels.
REMEMBER
Create a new column by assigning the output to the DataFrame with a
new column name in between the [].
Operations are element-wise, no need to loop over rows.
Use rename with a dictionary or function to rename row labels or
column names.
The user guide contains a separate section on column addition and deletion.
| getting_started/intro_tutorials/05_add_columns.html |
Options and settings | API for configuring global behavior. See the User Guide for more.
Working with options#
describe_option(pat[, _print_desc])
Prints the description for one or more registered options.
reset_option(pat)
Reset one or more options to their default value.
get_option(pat)
Retrieves the value of the specified option.
set_option(pat, value)
Sets the value of the specified option.
option_context(*args)
Context manager to temporarily set options in the with statement context.
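A brief, hedged sketch of the functions above used together (the option names shown are standard display options):
```
import pandas as pd

# Inspect and change an option by its dotted name.
print(pd.get_option("display.max_rows"))
pd.set_option("display.max_rows", 50)

# Temporarily override options inside a with block; they are restored on exit.
with pd.option_context("display.max_rows", 5, "display.precision", 2):
    print(pd.get_option("display.max_rows"))  # 5

# Reset a single option (or a pattern of options) to its default.
pd.reset_option("display.max_rows")
```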
| reference/options.html | null |
pandas.Series.str.slice_replace | `pandas.Series.str.slice_replace`
Replace a positional slice of a string with another value.
```
>>> s = pd.Series(['a', 'ab', 'abc', 'abdc', 'abcde'])
>>> s
0 a
1 ab
2 abc
3 abdc
4 abcde
dtype: object
``` | Series.str.slice_replace(start=None, stop=None, repl=None)[source]#
Replace a positional slice of a string with another value.
Parameters
startint, optionalLeft index position to use for the slice. If not specified (None),
the slice is unbounded on the left, i.e. slice from the start
of the string.
stopint, optionalRight index position to use for the slice. If not specified (None),
the slice is unbounded on the right, i.e. slice until the
end of the string.
replstr, optionalString for replacement. If not specified (None), the sliced region
is replaced with an empty string.
Returns
Series or IndexSame type as the original object.
See also
Series.str.sliceJust slicing without replacement.
Examples
>>> s = pd.Series(['a', 'ab', 'abc', 'abdc', 'abcde'])
>>> s
0 a
1 ab
2 abc
3 abdc
4 abcde
dtype: object
Specify just start, meaning replace start until the end of the
string with repl.
>>> s.str.slice_replace(1, repl='X')
0 aX
1 aX
2 aX
3 aX
4 aX
dtype: object
Specify just stop, meaning the start of the string to stop is replaced
with repl, and the rest of the string is included.
>>> s.str.slice_replace(stop=2, repl='X')
0 X
1 X
2 Xc
3 Xdc
4 Xcde
dtype: object
Specify start and stop, meaning the slice from start to stop is
replaced with repl. Everything before or after start and stop is
included as is.
>>> s.str.slice_replace(start=1, stop=3, repl='X')
0 aX
1 aX
2 aX
3 aXc
4 aXde
dtype: object
| reference/api/pandas.Series.str.slice_replace.html |
pandas.DataFrame.isna | `pandas.DataFrame.isna`
Detect missing values.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
``` | DataFrame.isna()[source]#
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None or numpy.NaN, get mapped to True
values.
Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
Returns
DataFrameMask of bool values for each element in DataFrame that
indicates whether an element is an NA value.
See also
DataFrame.isnullAlias of isna.
DataFrame.notnaBoolean inverse of isna.
DataFrame.dropnaOmit axes labels with missing values.
isnaTop-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
| reference/api/pandas.DataFrame.isna.html |
pandas.tseries.offsets.BusinessMonthBegin.n | pandas.tseries.offsets.BusinessMonthBegin.n | BusinessMonthBegin.n#
| reference/api/pandas.tseries.offsets.BusinessMonthBegin.n.html |
pandas.io.stata.StataReader.value_labels | `pandas.io.stata.StataReader.value_labels`
Return a nested dict associating each variable name to its value and label. | StataReader.value_labels()[source]#
Return a nested dict associating each variable name to its value and label.
Returns
dict
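A hedged round-trip sketch (the file name is invented): to_stata writes categorical columns as Stata value labels, which value_labels() then exposes as a nested dict.
```
import pandas as pd

df = pd.DataFrame({"kind": pd.Categorical(["falcon", "parrot", "falcon"])})
df.to_stata("labeled.dta")  # hypothetical file name

with pd.io.stata.StataReader("labeled.dta") as reader:
    data = reader.read()          # value labels are parsed alongside the data
    print(reader.value_labels())  # e.g. {'kind': {0: 'falcon', 1: 'parrot'}}
```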
| reference/api/pandas.io.stata.StataReader.value_labels.html |
pandas.IntervalIndex.is_overlapping | `pandas.IntervalIndex.is_overlapping`
Return True if the IntervalIndex has overlapping intervals, else False.
Two intervals overlap if they share a common point, including closed
endpoints. Intervals that only have an open endpoint in common do not
overlap.
```
>>> index = pd.IntervalIndex.from_tuples([(0, 2), (1, 3), (4, 5)])
>>> index
IntervalIndex([(0, 2], (1, 3], (4, 5]],
dtype='interval[int64, right]')
>>> index.is_overlapping
True
``` | property IntervalIndex.is_overlapping[source]#
Return True if the IntervalIndex has overlapping intervals, else False.
Two intervals overlap if they share a common point, including closed
endpoints. Intervals that only have an open endpoint in common do not
overlap.
Returns
boolBoolean indicating if the IntervalIndex has overlapping intervals.
See also
Interval.overlapsCheck whether two Interval objects overlap.
IntervalIndex.overlapsCheck an IntervalIndex elementwise for overlaps.
Examples
>>> index = pd.IntervalIndex.from_tuples([(0, 2), (1, 3), (4, 5)])
>>> index
IntervalIndex([(0, 2], (1, 3], (4, 5]],
dtype='interval[int64, right]')
>>> index.is_overlapping
True
Intervals that share closed endpoints overlap:
>>> index = pd.interval_range(0, 3, closed='both')
>>> index
IntervalIndex([[0, 1], [1, 2], [2, 3]],
dtype='interval[int64, both]')
>>> index.is_overlapping
True
Intervals that only have an open endpoint in common do not overlap:
>>> index = pd.interval_range(0, 3, closed='left')
>>> index
IntervalIndex([[0, 1), [1, 2), [2, 3)],
dtype='interval[int64, left]')
>>> index.is_overlapping
False
| reference/api/pandas.IntervalIndex.is_overlapping.html |
pandas.core.groupby.DataFrameGroupBy.skew | `pandas.core.groupby.DataFrameGroupBy.skew`
Return unbiased skew over requested axis. | property DataFrameGroupBy.skew[source]#
Return unbiased skew over requested axis.
Normalized by N-1.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
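A minimal, hedged sketch (data invented) of per-group skewness:
```
import pandas as pd

df = pd.DataFrame({
    "grp": ["a", "a", "a", "b", "b", "b"],
    "val": [1.0, 2.0, 10.0, 3.0, 3.5, 4.0],
})

# Unbiased (N-1 normalized) skew of the numeric column within each group.
print(df.groupby("grp")["val"].skew())
```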
| reference/api/pandas.core.groupby.DataFrameGroupBy.skew.html |
pandas.tseries.offsets.Easter.onOffset | pandas.tseries.offsets.Easter.onOffset | Easter.onOffset()#
| reference/api/pandas.tseries.offsets.Easter.onOffset.html |
pandas.tseries.offsets.Tick.rollforward | `pandas.tseries.offsets.Tick.rollforward`
Roll provided date forward to next offset only if not on offset. | Tick.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.Tick.rollforward.html |
pandas.Index.view | pandas.Index.view | Index.view(cls=None)[source]#
| reference/api/pandas.Index.view.html |
pandas.tseries.offsets.BYearBegin.onOffset | pandas.tseries.offsets.BYearBegin.onOffset | BYearBegin.onOffset()#
| reference/api/pandas.tseries.offsets.BYearBegin.onOffset.html |
pandas.to_timedelta | `pandas.to_timedelta`
Convert argument to timedelta.
Timedeltas are absolute differences in times, expressed in difference
units (e.g. days, hours, minutes, seconds). This method converts
an argument from a recognized timedelta format / value into
a Timedelta type.
```
>>> pd.to_timedelta('1 days 06:05:01.00003')
Timedelta('1 days 06:05:01.000030')
>>> pd.to_timedelta('15.5us')
Timedelta('0 days 00:00:00.000015500')
``` | pandas.to_timedelta(arg, unit=None, errors='raise')[source]#
Convert argument to timedelta.
Timedeltas are absolute differences in times, expressed in difference
units (e.g. days, hours, minutes, seconds). This method converts
an argument from a recognized timedelta format / value into
a Timedelta type.
Parameters
argstr, timedelta, list-like or SeriesThe data to be converted to timedelta.
Deprecated since version 1.2: Strings with units ‘M’, ‘Y’ and ‘y’ do not represent
unambiguous timedelta values and will be removed in a future version.
unitstr, optionalDenotes the unit of the arg for numeric arg. Defaults to "ns".
Possible values:
‘W’
‘D’ / ‘days’ / ‘day’
‘hours’ / ‘hour’ / ‘hr’ / ‘h’
‘m’ / ‘minute’ / ‘min’ / ‘minutes’ / ‘T’
‘S’ / ‘seconds’ / ‘sec’ / ‘second’
‘ms’ / ‘milliseconds’ / ‘millisecond’ / ‘milli’ / ‘millis’ / ‘L’
‘us’ / ‘microseconds’ / ‘microsecond’ / ‘micro’ / ‘micros’ / ‘U’
‘ns’ / ‘nanoseconds’ / ‘nano’ / ‘nanos’ / ‘nanosecond’ / ‘N’
Changed in version 1.1.0: Must not be specified when arg contains strings and
errors="raise".
errors{‘ignore’, ‘raise’, ‘coerce’}, default ‘raise’
If ‘raise’, then invalid parsing will raise an exception.
If ‘coerce’, then invalid parsing will be set as NaT.
If ‘ignore’, then invalid parsing will return the input.
Returns
timedeltaIf parsing succeeded.
Return type depends on input:
list-like: TimedeltaIndex of timedelta64 dtype
Series: Series of timedelta64 dtype
scalar: Timedelta
See also
DataFrame.astypeCast argument to a specified dtype.
to_datetimeConvert argument to datetime.
convert_dtypesConvert dtypes.
Notes
If the precision is higher than nanoseconds, the precision of the duration is
truncated to nanoseconds for string inputs.
Examples
Parsing a single string to a Timedelta:
>>> pd.to_timedelta('1 days 06:05:01.00003')
Timedelta('1 days 06:05:01.000030')
>>> pd.to_timedelta('15.5us')
Timedelta('0 days 00:00:00.000015500')
Parsing a list or array of strings:
>>> pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan'])
TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015500', NaT],
dtype='timedelta64[ns]', freq=None)
Converting numbers by specifying the unit keyword argument:
>>> pd.to_timedelta(np.arange(5), unit='s')
TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:01', '0 days 00:00:02',
'0 days 00:00:03', '0 days 00:00:04'],
dtype='timedelta64[ns]', freq=None)
>>> pd.to_timedelta(np.arange(5), unit='d')
TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq=None)
| reference/api/pandas.to_timedelta.html |
pandas.DataFrame.plot.bar | `pandas.DataFrame.plot.bar`
Vertical bar plot.
```
>>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
>>> ax = df.plot.bar(x='lab', y='val', rot=0)
``` | DataFrame.plot.bar(x=None, y=None, **kwargs)[source]#
Vertical bar plot.
A bar plot is a plot that presents categorical data with
rectangular bars with lengths proportional to the values that they
represent. A bar plot shows comparisons among discrete categories. One
axis of the plot shows the specific categories being compared, and the
other axis represents a measured value.
Parameters
xlabel or position, optionalAllows plotting of one column versus another. If not specified,
the index of the DataFrame is used.
ylabel or position, optionalAllows plotting of one column versus another. If not specified,
all numerical columns are used.
colorstr, array-like, or dict, optionalThe color for each of the DataFrame’s columns. Possible values are:
A single color string referred to by name, RGB or RGBA code, for instance 'red' or '#a98d19'.
A sequence of color strings referred to by name, RGB or RGBA code, which will be used for each column recursively. For
instance ['green', 'yellow'] each column's bar will be filled in
green or yellow, alternatively. If there is only a single column to
be plotted, then only the first color from the color list will be
used.
A dict of the form {column name: color}, so that each column will be colored accordingly. For example, if your columns are called a and
b, then passing {'a': 'green', 'b': 'red'} will color bars for
column a in green and bars for column b in red.
New in version 1.1.0.
**kwargsAdditional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of themAn ndarray is returned with one matplotlib.axes.Axes
per column when subplots=True.
See also
DataFrame.plot.barhHorizontal bar plot.
DataFrame.plotMake plots of a DataFrame.
matplotlib.pyplot.barMake a bar plot with matplotlib.
Examples
Basic plot.
>>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
>>> ax = df.plot.bar(x='lab', y='val', rot=0)
Plot a whole dataframe to a bar plot. Each column is assigned a
distinct color, and each row is nested in a group along the
horizontal axis.
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = pd.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> ax = df.plot.bar(rot=0)
Plot stacked bar charts for the DataFrame
>>> ax = df.plot.bar(stacked=True)
Instead of nesting, the figure can be split by column with
subplots=True. In this case, a numpy.ndarray of
matplotlib.axes.Axes are returned.
>>> axes = df.plot.bar(rot=0, subplots=True)
>>> axes[1].legend(loc=2)
If you don’t like the default colours, you can specify how you’d
like each column to be colored.
>>> axes = df.plot.bar(
... rot=0, subplots=True, color={"speed": "red", "lifespan": "green"}
... )
>>> axes[1].legend(loc=2)
Plot a single column.
>>> ax = df.plot.bar(y='speed', rot=0)
Plot only selected categories for the DataFrame.
>>> ax = df.plot.bar(x='lifespan', rot=0)
| reference/api/pandas.DataFrame.plot.bar.html |
pandas.tseries.offsets.YearEnd.is_month_end | `pandas.tseries.offsets.YearEnd.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | YearEnd.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.YearEnd.is_month_end.html |
pandas.tseries.offsets.Nano.is_quarter_end | `pandas.tseries.offsets.Nano.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | Nano.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.Nano.is_quarter_end.html |
pandas.tseries.offsets.Hour.normalize | pandas.tseries.offsets.Hour.normalize | Hour.normalize#
| reference/api/pandas.tseries.offsets.Hour.normalize.html |
Testing | Assertion functions#
testing.assert_frame_equal(left, right[, ...])
Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, ...])
Check that left and right Series are equal.
testing.assert_index_equal(left, right[, ...])
Check that left and right Index are equal.
testing.assert_extension_array_equal(left, right)
Check that left and right ExtensionArrays are equal.
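A short, hedged sketch of the assertion helpers as they might appear inside a test (the frames are invented):
```
import pandas as pd
import pandas.testing as tm

left = pd.DataFrame({"a": [1.0, 2.0]})
right = pd.DataFrame({"a": [1.0, 2.000001]})

# Passes: differences are within the default relative tolerance.
tm.assert_frame_equal(left, right, check_exact=False)

# Raises AssertionError when exact equality is demanded.
try:
    tm.assert_frame_equal(left, right, check_exact=True)
except AssertionError as err:
    print("frames differ:", err)
```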
Exceptions and warnings#
errors.AbstractMethodError(class_instance[, ...])
Raise this error instead of NotImplementedError for abstract methods.
errors.AccessorRegistrationWarning
Warning for attribute conflicts in accessor registration.
errors.AttributeConflictWarning
Warning raised when index attributes conflict when using HDFStore.
errors.CategoricalConversionWarning
Warning is raised when reading a partially labeled Stata file using an iterator.
errors.ClosedFileError
Exception is raised when trying to perform an operation on a closed HDFStore file.
errors.CSSWarning
Warning is raised when converting css styling fails.
errors.DatabaseError
Error is raised when executing sql with bad syntax or sql that throws an error.
errors.DataError
Exception raised when performing an operation on non-numerical data.
errors.DtypeWarning
Warning raised when reading different dtypes in a column from a file.
errors.DuplicateLabelError
Error raised when an operation would introduce duplicate labels.
errors.EmptyDataError
Exception raised in pd.read_csv when empty data or header is encountered.
errors.IncompatibilityWarning
Warning raised when trying to use where criteria on an incompatible HDF5 file.
errors.IndexingError
Exception is raised when trying to index and there is a mismatch in dimensions.
errors.InvalidColumnName
Warning raised by to_stata when the column contains a non-valid Stata name.
errors.InvalidIndexError
Exception raised when attempting to use an invalid index key.
errors.IntCastingNaNError
Exception raised when converting (astype) an array with NaN to an integer type.
errors.MergeError
Exception raised when merging data.
errors.NullFrequencyError
Exception raised when a freq cannot be null.
errors.NumbaUtilError
Error raised for unsupported Numba engine routines.
errors.NumExprClobberingError
Exception raised when trying to use a built-in numexpr name as a variable name.
errors.OptionError
Exception raised for pandas.options.
errors.OutOfBoundsDatetime
Raised when the datetime is outside the range that can be represented.
errors.OutOfBoundsTimedelta
Raised when encountering a timedelta value that cannot be represented.
errors.ParserError
Exception that is raised by an error encountered in parsing file contents.
errors.ParserWarning
Warning raised when reading a file that doesn't use the default 'c' parser.
errors.PerformanceWarning
Warning raised when there is a possible performance impact.
errors.PossibleDataLossError
Exception raised when trying to open a HDFStore file when already opened.
errors.PossiblePrecisionLoss
Warning raised by to_stata on a column with a value outside or equal to int64.
errors.PyperclipException
Exception raised when clipboard functionality is unsupported.
errors.PyperclipWindowsException(message)
Exception raised when clipboard functionality is unsupported by Windows.
errors.SettingWithCopyError
Exception raised when trying to set on a copied slice from a DataFrame.
errors.SettingWithCopyWarning
Warning raised when trying to set on a copied slice from a DataFrame.
errors.SpecificationError
Exception raised by agg when the functions are ill-specified.
errors.UndefinedVariableError(name[, is_local])
Exception raised by query or eval when using an undefined variable name.
errors.UnsortedIndexError
Error raised when slicing a MultiIndex which has not been lexsorted.
errors.UnsupportedFunctionCall
Exception raised when attempting to call an unsupported numpy function.
errors.ValueLabelTypeMismatch
Warning raised by to_stata on a category column that contains non-string values.
Bug report function#
show_versions([as_json])
Provide useful information, important for bug reports.
Test suite runner#
test([extra_args])
Run the pandas test suite using pytest.
| reference/testing.html | null |
pandas.Timestamp.month | pandas.Timestamp.month | Timestamp.month#
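The page body above is empty; as a one-line hedged sketch, month is the integer month component of the timestamp:
```
>>> pd.Timestamp("2020-03-14").month
3
```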
| reference/api/pandas.Timestamp.month.html |
pandas.tseries.offsets.Tick.normalize | pandas.tseries.offsets.Tick.normalize | Tick.normalize#
| reference/api/pandas.tseries.offsets.Tick.normalize.html |
pandas.tseries.offsets.QuarterBegin.freqstr | `pandas.tseries.offsets.QuarterBegin.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
``` | QuarterBegin.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.QuarterBegin.freqstr.html |
DataFrame | DataFrame | Constructor#
DataFrame([data, index, columns, dtype, copy])
Two-dimensional, size-mutable, potentially heterogeneous tabular data.
Attributes and underlying data#
Axes
DataFrame.index
The index (row labels) of the DataFrame.
DataFrame.columns
The column labels of the DataFrame.
DataFrame.dtypes
Return the dtypes in the DataFrame.
DataFrame.info([verbose, buf, max_cols, ...])
Print a concise summary of a DataFrame.
DataFrame.select_dtypes([include, exclude])
Return a subset of the DataFrame's columns based on the column dtypes.
DataFrame.values
Return a Numpy representation of the DataFrame.
DataFrame.axes
Return a list representing the axes of the DataFrame.
DataFrame.ndim
Return an int representing the number of axes / array dimensions.
DataFrame.size
Return an int representing the number of elements in this object.
DataFrame.shape
Return a tuple representing the dimensionality of the DataFrame.
DataFrame.memory_usage([index, deep])
Return the memory usage of each column in bytes.
DataFrame.empty
Indicator whether Series/DataFrame is empty.
DataFrame.set_flags(*[, copy, ...])
Return a new object with updated flags.
Conversion#
DataFrame.astype(dtype[, copy, errors])
Cast a pandas object to a specified dtype dtype.
DataFrame.convert_dtypes([infer_objects, ...])
Convert columns to best possible dtypes using dtypes supporting pd.NA.
DataFrame.infer_objects()
Attempt to infer better dtypes for object columns.
DataFrame.copy([deep])
Make a copy of this object's indices and data.
DataFrame.bool()
Return the bool of a single element Series or DataFrame.
Indexing, iteration#
DataFrame.head([n])
Return the first n rows.
DataFrame.at
Access a single value for a row/column label pair.
DataFrame.iat
Access a single value for a row/column pair by integer position.
DataFrame.loc
Access a group of rows and columns by label(s) or a boolean array.
DataFrame.iloc
Purely integer-location based indexing for selection by position.
DataFrame.insert(loc, column, value[, ...])
Insert column into DataFrame at specified location.
DataFrame.__iter__()
Iterate over info axis.
DataFrame.items()
Iterate over (column name, Series) pairs.
DataFrame.iteritems()
(DEPRECATED) Iterate over (column name, Series) pairs.
DataFrame.keys()
Get the 'info axis' (see Indexing for more).
DataFrame.iterrows()
Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuples([index, name])
Iterate over DataFrame rows as namedtuples.
DataFrame.lookup(row_labels, col_labels)
(DEPRECATED) Label-based "fancy indexing" function for DataFrame.
DataFrame.pop(item)
Return item and drop from frame.
DataFrame.tail([n])
Return the last n rows.
DataFrame.xs(key[, axis, level, drop_level])
Return cross-section from the Series/DataFrame.
DataFrame.get(key[, default])
Get item from object for given key (ex: DataFrame column).
DataFrame.isin(values)
Whether each element in the DataFrame is contained in values.
DataFrame.where(cond[, other, inplace, ...])
Replace values where the condition is False.
DataFrame.mask(cond[, other, inplace, axis, ...])
Replace values where the condition is True.
DataFrame.query(expr, *[, inplace])
Query the columns of a DataFrame with a boolean expression.
For more information on .at, .iat, .loc, and
.iloc, see the indexing documentation.
Binary operator functions#
DataFrame.add(other[, axis, level, fill_value])
Get Addition of dataframe and other, element-wise (binary operator add).
DataFrame.sub(other[, axis, level, fill_value])
Get Subtraction of dataframe and other, element-wise (binary operator sub).
DataFrame.mul(other[, axis, level, fill_value])
Get Multiplication of dataframe and other, element-wise (binary operator mul).
DataFrame.div(other[, axis, level, fill_value])
Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.truediv(other[, axis, level, ...])
Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.floordiv(other[, axis, level, ...])
Get Integer division of dataframe and other, element-wise (binary operator floordiv).
DataFrame.mod(other[, axis, level, fill_value])
Get Modulo of dataframe and other, element-wise (binary operator mod).
DataFrame.pow(other[, axis, level, fill_value])
Get Exponential power of dataframe and other, element-wise (binary operator pow).
DataFrame.dot(other)
Compute the matrix multiplication between the DataFrame and other.
DataFrame.radd(other[, axis, level, fill_value])
Get Addition of dataframe and other, element-wise (binary operator radd).
DataFrame.rsub(other[, axis, level, fill_value])
Get Subtraction of dataframe and other, element-wise (binary operator rsub).
DataFrame.rmul(other[, axis, level, fill_value])
Get Multiplication of dataframe and other, element-wise (binary operator rmul).
DataFrame.rdiv(other[, axis, level, fill_value])
Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rtruediv(other[, axis, level, ...])
Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rfloordiv(other[, axis, level, ...])
Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
DataFrame.rmod(other[, axis, level, fill_value])
Get Modulo of dataframe and other, element-wise (binary operator rmod).
DataFrame.rpow(other[, axis, level, fill_value])
Get Exponential power of dataframe and other, element-wise (binary operator rpow).
DataFrame.lt(other[, axis, level])
Get Less than of dataframe and other, element-wise (binary operator lt).
DataFrame.gt(other[, axis, level])
Get Greater than of dataframe and other, element-wise (binary operator gt).
DataFrame.le(other[, axis, level])
Get Less than or equal to of dataframe and other, element-wise (binary operator le).
DataFrame.ge(other[, axis, level])
Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
DataFrame.ne(other[, axis, level])
Get Not equal to of dataframe and other, element-wise (binary operator ne).
DataFrame.eq(other[, axis, level])
Get Equal to of dataframe and other, element-wise (binary operator eq).
DataFrame.combine(other, func[, fill_value, ...])
Perform column-wise combine with another DataFrame.
DataFrame.combine_first(other)
Update null elements with value in the same location in other.
Function application, GroupBy & window#
DataFrame.apply(func[, axis, raw, ...])
Apply a function along an axis of the DataFrame.
DataFrame.applymap(func[, na_action])
Apply a function to a Dataframe elementwise.
DataFrame.pipe(func, *args, **kwargs)
Apply chainable functions that expect Series or DataFrames.
DataFrame.agg([func, axis])
Aggregate using one or more operations over the specified axis.
DataFrame.aggregate([func, axis])
Aggregate using one or more operations over the specified axis.
DataFrame.transform(func[, axis])
Call func on self producing a DataFrame with the same axis shape as self.
DataFrame.groupby([by, axis, level, ...])
Group DataFrame using a mapper or by a Series of columns.
DataFrame.rolling(window[, min_periods, ...])
Provide rolling window calculations.
DataFrame.expanding([min_periods, center, ...])
Provide expanding window calculations.
DataFrame.ewm([com, span, halflife, alpha, ...])
Provide exponentially weighted (EW) calculations.
Computations / descriptive stats#
DataFrame.abs()
Return a Series/DataFrame with absolute numeric value of each element.
DataFrame.all([axis, bool_only, skipna, level])
Return whether all elements are True, potentially over an axis.
DataFrame.any(*[, axis, bool_only, skipna, ...])
Return whether any element is True, potentially over an axis.
DataFrame.clip([lower, upper, axis, inplace])
Trim values at input threshold(s).
DataFrame.corr([method, min_periods, ...])
Compute pairwise correlation of columns, excluding NA/null values.
DataFrame.corrwith(other[, axis, drop, ...])
Compute pairwise correlation.
DataFrame.count([axis, level, numeric_only])
Count non-NA cells for each column or row.
DataFrame.cov([min_periods, ddof, numeric_only])
Compute pairwise covariance of columns, excluding NA/null values.
DataFrame.cummax([axis, skipna])
Return cumulative maximum over a DataFrame or Series axis.
DataFrame.cummin([axis, skipna])
Return cumulative minimum over a DataFrame or Series axis.
DataFrame.cumprod([axis, skipna])
Return cumulative product over a DataFrame or Series axis.
DataFrame.cumsum([axis, skipna])
Return cumulative sum over a DataFrame or Series axis.
DataFrame.describe([percentiles, include, ...])
Generate descriptive statistics.
DataFrame.diff([periods, axis])
First discrete difference of element.
DataFrame.eval(expr, *[, inplace])
Evaluate a string describing operations on DataFrame columns.
DataFrame.kurt([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
DataFrame.kurtosis([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
DataFrame.mad([axis, skipna, level])
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
DataFrame.max([axis, skipna, level, ...])
Return the maximum of the values over the requested axis.
DataFrame.mean([axis, skipna, level, ...])
Return the mean of the values over the requested axis.
DataFrame.median([axis, skipna, level, ...])
Return the median of the values over the requested axis.
DataFrame.min([axis, skipna, level, ...])
Return the minimum of the values over the requested axis.
DataFrame.mode([axis, numeric_only, dropna])
Get the mode(s) of each element along the selected axis.
DataFrame.pct_change([periods, fill_method, ...])
Percentage change between the current and a prior element.
DataFrame.prod([axis, skipna, level, ...])
Return the product of the values over the requested axis.
DataFrame.product([axis, skipna, level, ...])
Return the product of the values over the requested axis.
DataFrame.quantile([q, axis, numeric_only, ...])
Return values at the given quantile over requested axis.
DataFrame.rank([axis, method, numeric_only, ...])
Compute numerical data ranks (1 through n) along axis.
DataFrame.round([decimals])
Round a DataFrame to a variable number of decimal places.
DataFrame.sem([axis, skipna, level, ddof, ...])
Return unbiased standard error of the mean over requested axis.
DataFrame.skew([axis, skipna, level, ...])
Return unbiased skew over requested axis.
DataFrame.sum([axis, skipna, level, ...])
Return the sum of the values over the requested axis.
DataFrame.std([axis, skipna, level, ddof, ...])
Return sample standard deviation over requested axis.
DataFrame.var([axis, skipna, level, ddof, ...])
Return unbiased variance over requested axis.
DataFrame.nunique([axis, dropna])
Count number of distinct elements in specified axis.
DataFrame.value_counts([subset, normalize, ...])
Return a Series containing counts of unique rows in the DataFrame.
Reindexing / selection / label manipulation#
DataFrame.add_prefix(prefix)
Prefix labels with string prefix.
DataFrame.add_suffix(suffix)
Suffix labels with string suffix.
DataFrame.align(other[, join, axis, level, ...])
Align two objects on their axes with the specified join method.
DataFrame.at_time(time[, asof, axis])
Select values at particular time of day (e.g., 9:30AM).
DataFrame.between_time(start_time, end_time)
Select values between particular times of the day (e.g., 9:00-9:30 AM).
DataFrame.drop([labels, axis, index, ...])
Drop specified labels from rows or columns.
DataFrame.drop_duplicates([subset, keep, ...])
Return DataFrame with duplicate rows removed.
DataFrame.duplicated([subset, keep])
Return boolean Series denoting duplicate rows.
DataFrame.equals(other)
Test whether two objects contain the same elements.
DataFrame.filter([items, like, regex, axis])
Subset the dataframe rows or columns according to the specified index labels.
DataFrame.first(offset)
Select initial periods of time series data based on a date offset.
DataFrame.head([n])
Return the first n rows.
DataFrame.idxmax([axis, skipna, numeric_only])
Return index of first occurrence of maximum over requested axis.
DataFrame.idxmin([axis, skipna, numeric_only])
Return index of first occurrence of minimum over requested axis.
DataFrame.last(offset)
Select final periods of time series data based on a date offset.
DataFrame.reindex([labels, index, columns, ...])
Conform Series/DataFrame to new index with optional filling logic.
DataFrame.reindex_like(other[, method, ...])
Return an object with matching indices as other object.
DataFrame.rename([mapper, index, columns, ...])
Alter axes labels.
DataFrame.rename_axis([mapper, inplace])
Set the name of the axis for the index or columns.
DataFrame.reset_index([level, drop, ...])
Reset the index, or a level of it.
DataFrame.sample([n, frac, replace, ...])
Return a random sample of items from an axis of object.
DataFrame.set_axis(labels, *[, axis, ...])
Assign desired index to given axis.
DataFrame.set_index(keys, *[, drop, append, ...])
Set the DataFrame index using existing columns.
DataFrame.tail([n])
Return the last n rows.
DataFrame.take(indices[, axis, is_copy])
Return the elements in the given positional indices along an axis.
DataFrame.truncate([before, after, axis, copy])
Truncate a Series or DataFrame before and after some index value.
Missing data handling#
DataFrame.backfill(*[, axis, inplace, ...])
Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.bfill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.dropna(*[, axis, how, thresh, ...])
Remove missing values.
DataFrame.ffill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.fillna([value, method, axis, ...])
Fill NA/NaN values using the specified method.
DataFrame.interpolate([method, axis, limit, ...])
Fill NaN values using an interpolation method.
DataFrame.isna()
Detect missing values.
DataFrame.isnull()
DataFrame.isnull is an alias for DataFrame.isna.
DataFrame.notna()
Detect existing (non-missing) values.
DataFrame.notnull()
DataFrame.notnull is an alias for DataFrame.notna.
DataFrame.pad(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.replace([to_replace, value, ...])
Replace values given in to_replace with value.
Reshaping, sorting, transposing#
DataFrame.droplevel(level[, axis])
Return Series/DataFrame with requested index / column level(s) removed.
DataFrame.pivot(*[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
DataFrame.pivot_table([values, index, ...])
Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.reorder_levels(order[, axis])
Rearrange index levels using input order.
DataFrame.sort_values(by, *[, axis, ...])
Sort by the values along either axis.
DataFrame.sort_index(*[, axis, level, ...])
Sort object by labels (along an axis).
DataFrame.nlargest(n, columns[, keep])
Return the first n rows ordered by columns in descending order.
DataFrame.nsmallest(n, columns[, keep])
Return the first n rows ordered by columns in ascending order.
DataFrame.swaplevel([i, j, axis])
Swap levels i and j in a MultiIndex.
DataFrame.stack([level, dropna])
Stack the prescribed level(s) from columns to index.
DataFrame.unstack([level, fill_value])
Pivot a level of the (necessarily hierarchical) index labels.
DataFrame.swapaxes(axis1, axis2[, copy])
Interchange axes and swap values axes appropriately.
DataFrame.melt([id_vars, value_vars, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
DataFrame.explode(column[, ignore_index])
Transform each element of a list-like to a row, replicating index values.
DataFrame.squeeze([axis])
Squeeze 1 dimensional axis objects into scalars.
DataFrame.to_xarray()
Return an xarray object from the pandas object.
DataFrame.T
DataFrame.transpose(*args[, copy])
Transpose index and columns.
Combining / comparing / joining / merging#
DataFrame.append(other[, ignore_index, ...])
(DEPRECATED) Append rows of other to the end of caller, returning a new object.
DataFrame.assign(**kwargs)
Assign new columns to a DataFrame.
DataFrame.compare(other[, align_axis, ...])
Compare to another DataFrame and show the differences.
DataFrame.join(other[, on, how, lsuffix, ...])
Join columns of another DataFrame.
DataFrame.merge(right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
DataFrame.update(other[, join, overwrite, ...])
Modify in place using non-NA values from another DataFrame.
Time Series-related#
DataFrame.asfreq(freq[, method, how, ...])
Convert time series to specified frequency.
DataFrame.asof(where[, subset])
Return the last row(s) without any NaNs before where.
DataFrame.shift([periods, freq, axis, ...])
Shift index by desired number of periods with an optional time freq.
DataFrame.slice_shift([periods, axis])
(DEPRECATED) Equivalent to shift without copying data.
DataFrame.tshift([periods, freq, axis])
(DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrame.first_valid_index()
Return index for first non-NA value or None, if no non-NA value is found.
DataFrame.last_valid_index()
Return index for last non-NA value or None, if no non-NA value is found.
DataFrame.resample(rule[, axis, closed, ...])
Resample time-series data.
DataFrame.to_period([freq, axis, copy])
Convert DataFrame from DatetimeIndex to PeriodIndex.
DataFrame.to_timestamp([freq, how, axis, copy])
Cast to DatetimeIndex of timestamps, at beginning of period.
DataFrame.tz_convert(tz[, axis, level, copy])
Convert tz-aware axis to target time zone.
DataFrame.tz_localize(tz[, axis, level, ...])
Localize tz-naive index of a Series or DataFrame to target time zone.
Flags#
Flags refer to attributes of the pandas object. Properties of the dataset (like
the date it was recorded, the URL it was accessed from, etc.) should be stored
in DataFrame.attrs.
Flags(obj, *, allows_duplicate_labels)
Flags that apply to pandas objects.
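As a minimal sketch, a flag can be read from DataFrame.flags and set via DataFrame.set_flags:
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1, 2]})
>>> df.flags.allows_duplicate_labels
True
>>> df.set_flags(allows_duplicate_labels=False).flags.allows_duplicate_labels
False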
Metadata#
DataFrame.attrs is a dictionary for storing global metadata for this DataFrame.
Warning
DataFrame.attrs is considered experimental and may change without warning.
DataFrame.attrs
Dictionary of global attributes of this dataset.
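As a minimal sketch (the metadata key here is arbitrary), attrs behaves like an ordinary dictionary attached to the object:
>>> df = pd.DataFrame({"A": [1, 2]})
>>> df.attrs["source"] = "sensor-42"  # arbitrary, user-chosen key
>>> df.attrs
{'source': 'sensor-42'}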
Plotting#
DataFrame.plot is both a callable method and a namespace attribute for
specific plotting methods of the form DataFrame.plot.<kind>.
DataFrame.plot([x, y, kind, ax, ....])
DataFrame plotting accessor and method.
DataFrame.plot.area([x, y])
Draw a stacked area plot.
DataFrame.plot.bar([x, y])
Vertical bar plot.
DataFrame.plot.barh([x, y])
Make a horizontal bar plot.
DataFrame.plot.box([by])
Make a box plot of the DataFrame columns.
DataFrame.plot.density([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.hexbin(x, y[, C, ...])
Generate a hexagonal binning plot.
DataFrame.plot.hist([by, bins])
Draw one histogram of the DataFrame's columns.
DataFrame.plot.kde([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.line([x, y])
Plot Series or DataFrame as lines.
DataFrame.plot.pie(**kwargs)
Generate a pie plot.
DataFrame.plot.scatter(x, y[, s, c])
Create a scatter plot with varying marker point size and color.
DataFrame.boxplot([column, by, ax, ...])
Make a box plot from DataFrame columns.
DataFrame.hist([column, by, grid, ...])
Make a histogram of the DataFrame's columns.
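As a minimal sketch of the two equivalent call styles described above (assuming matplotlib is installed as the plotting backend):
>>> df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 1, 2]})
>>> ax = df.plot(x="x", y="y", kind="line")  # callable form
>>> ax = df.plot.line(x="x", y="y")          # namespace form, same plot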
Sparse accessor#
Sparse-dtype specific methods and attributes are provided under the
DataFrame.sparse accessor.
DataFrame.sparse.density
Ratio of non-sparse points to total (dense) data points.
DataFrame.sparse.from_spmatrix(data[, ...])
Create a new DataFrame from a scipy sparse matrix.
DataFrame.sparse.to_coo()
Return the contents of the frame as a sparse SciPy COO matrix.
DataFrame.sparse.to_dense()
Convert a DataFrame with sparse values to dense.
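As a minimal sketch, assuming SciPy is available to build the sparse input:
>>> from scipy import sparse
>>> sp = sparse.eye(3)                          # mostly-zero 3x3 identity matrix
>>> df = pd.DataFrame.sparse.from_spmatrix(sp)
>>> df.sparse.density                           # 3 stored values out of 9
0.3333333333333333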
Serialization / IO / conversion#
DataFrame.from_dict(data[, orient, dtype, ...])
Construct DataFrame from dict of array-like or dicts.
DataFrame.from_records(data[, index, ...])
Convert structured or record ndarray to DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
DataFrame.to_hdf(path_or_buf, key[, mode, ...])
Write the contained data to an HDF5 file using HDFStore.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
DataFrame.to_dict([orient, into])
Convert the DataFrame to a dictionary.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
DataFrame.to_gbq(destination_table[, ...])
Write a DataFrame to a Google BigQuery table.
DataFrame.to_records([index, column_dtypes, ...])
Convert DataFrame to a NumPy record array.
DataFrame.to_string([buf, columns, ...])
Render a DataFrame to a console-friendly tabular output.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
DataFrame.to_markdown([buf, mode, index, ...])
Print DataFrame in Markdown-friendly format.
DataFrame.style
Returns a Styler object.
DataFrame.__dataframe__([nan_as_null, ...])
Return the dataframe interchange object implementing the interchange protocol.
| reference/frame.html |
pandas.core.resample.Resampler.pipe | `pandas.core.resample.Resampler.pipe`
Apply a func with arguments to this Resampler object and return its result.
```
>>> h(g(f(df.groupby('group')), arg1=a), arg2=b, arg3=c)
``` | Resampler.pipe(func, *args, **kwargs)[source]#
Apply a func with arguments to this Resampler object and return its result.
Use .pipe when you want to improve readability by chaining together
functions that expect Series, DataFrames, GroupBy or Resampler objects.
Instead of writing
>>> h(g(f(df.groupby('group')), arg1=a), arg2=b, arg3=c)
You can write
>>> (df.groupby('group')
... .pipe(f)
... .pipe(g, arg1=a)
... .pipe(h, arg2=b, arg3=c))
which is much more readable.
Parameters
funccallable or tuple of (callable, str)Function to apply to this Resampler object or, alternatively,
a (callable, data_keyword) tuple where data_keyword is a
string indicating the keyword of callable that expects the
Resampler object.
argsiterable, optionalPositional arguments passed into func.
kwargsdict, optionalA dictionary of keyword arguments passed into func.
Returns
objectthe return type of func.
See also
Series.pipeApply a function with arguments to a series.
DataFrame.pipeApply a function with arguments to a dataframe.
applyApply function to each group instead of to the full Resampler object.
Notes
See the GroupBy user guide section on piping for more.
Examples
>>> df = pd.DataFrame({'A': [1, 2, 3, 4]},
... index=pd.date_range('2012-08-02', periods=4))
>>> df
A
2012-08-02 1
2012-08-03 2
2012-08-04 3
2012-08-05 4
To get the difference between each 2-day period’s maximum and minimum
value in one pass, you can do
>>> df.resample('2D').pipe(lambda x: x.max() - x.min())
A
2012-08-02 1
2012-08-04 1
| reference/api/pandas.core.resample.Resampler.pipe.html |
pandas.Series.fillna | `pandas.Series.fillna`
Fill NA/NaN values using the specified method.
Value to use to fill holes (e.g. 0), alternately a
dict/Series/DataFrame of values specifying which value to use for
each index (for a Series) or column (for a DataFrame). Values not
in the dict/Series/DataFrame will not be filled. This value cannot
be a list.
```
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
... [3, 4, np.nan, 1],
... [np.nan, np.nan, np.nan, np.nan],
... [np.nan, 3, np.nan, 4]],
... columns=list("ABCD"))
>>> df
A B C D
0 NaN 2.0 NaN 0.0
1 3.0 4.0 NaN 1.0
2 NaN NaN NaN NaN
3 NaN 3.0 NaN 4.0
``` | Series.fillna(value=None, *, method=None, axis=None, inplace=False, limit=None, downcast=None)[source]#
Fill NA/NaN values using the specified method.
Parameters
valuescalar, dict, Series, or DataFrameValue to use to fill holes (e.g. 0), alternately a
dict/Series/DataFrame of values specifying which value to use for
each index (for a Series) or column (for a DataFrame). Values not
in the dict/Series/DataFrame will not be filled. This value cannot
be a list.
method{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default NoneMethod to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use next valid observation to fill gap.
axis{0 or ‘index’}Axis along which to fill missing values. For Series
this parameter is unused and defaults to 0.
inplacebool, default FalseIf True, fill in-place. Note: this will modify any
other views on this object (e.g., a no-copy slice for a column in a
DataFrame).
limitint, default NoneIf method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
a gap with more than this number of consecutive NaNs, it will only
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled. Must be greater than 0 if not None.
downcastdict, default is NoneA dict of item->dtype of what to downcast if possible,
or the string ‘infer’ which will try to downcast to an appropriate
equal type (e.g. float64 to int64 if possible).
Returns
Series or NoneObject with missing values filled or None if inplace=True.
See also
interpolateFill NaN values using interpolation.
reindexConform object to new index.
asfreqConvert TimeSeries to specified frequency.
Examples
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
... [3, 4, np.nan, 1],
... [np.nan, np.nan, np.nan, np.nan],
... [np.nan, 3, np.nan, 4]],
... columns=list("ABCD"))
>>> df
A B C D
0 NaN 2.0 NaN 0.0
1 3.0 4.0 NaN 1.0
2 NaN NaN NaN NaN
3 NaN 3.0 NaN 4.0
Replace all NaN elements with 0s.
>>> df.fillna(0)
A B C D
0 0.0 2.0 0.0 0.0
1 3.0 4.0 0.0 1.0
2 0.0 0.0 0.0 0.0
3 0.0 3.0 0.0 4.0
We can also propagate non-null values forward or backward.
>>> df.fillna(method="ffill")
A B C D
0 NaN 2.0 NaN 0.0
1 3.0 4.0 NaN 1.0
2 3.0 4.0 NaN 1.0
3 3.0 3.0 NaN 4.0
Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1,
2, and 3 respectively.
>>> values = {"A": 0, "B": 1, "C": 2, "D": 3}
>>> df.fillna(value=values)
A B C D
0 0.0 2.0 2.0 0.0
1 3.0 4.0 2.0 1.0
2 0.0 1.0 2.0 3.0
3 0.0 3.0 2.0 4.0
Only replace the first NaN element.
>>> df.fillna(value=values, limit=1)
A B C D
0 0.0 2.0 2.0 0.0
1 3.0 4.0 NaN 1.0
2 NaN 1.0 NaN 3.0
3 NaN 3.0 NaN 4.0
When filling using a DataFrame, replacement happens along
the same column names and same indices
>>> df2 = pd.DataFrame(np.zeros((4, 4)), columns=list("ABCE"))
>>> df.fillna(df2)
A B C D
0 0.0 2.0 0.0 0.0
1 3.0 4.0 0.0 1.0
2 0.0 0.0 0.0 NaN
3 0.0 3.0 0.0 4.0
Note that column D is not affected since it is not present in df2.
| reference/api/pandas.Series.fillna.html |
pandas.api.types.is_complex_dtype | `pandas.api.types.is_complex_dtype`
Check whether the provided array or dtype is of a complex dtype.
```
>>> is_complex_dtype(str)
False
>>> is_complex_dtype(int)
False
>>> is_complex_dtype(np.complex_)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
>>> is_complex_dtype(pd.Series([1, 2]))
False
>>> is_complex_dtype(np.array([1 + 1j, 5]))
True
``` | pandas.api.types.is_complex_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of a complex dtype.
Parameters
arr_or_dtypearray-like or dtypeThe array or dtype to check.
Returns
booleanWhether or not the array or dtype is of a complex dtype.
Examples
>>> is_complex_dtype(str)
False
>>> is_complex_dtype(int)
False
>>> is_complex_dtype(np.complex_)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
>>> is_complex_dtype(pd.Series([1, 2]))
False
>>> is_complex_dtype(np.array([1 + 1j, 5]))
True
| reference/api/pandas.api.types.is_complex_dtype.html |
pandas.core.groupby.DataFrameGroupBy.cummax | `pandas.core.groupby.DataFrameGroupBy.cummax`
Cumulative max for each group. | DataFrameGroupBy.cummax(axis=0, numeric_only=False, **kwargs)[source]#
Cumulative max for each group.
Returns
Series or DataFrame
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
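A minimal sketch with arbitrary values, showing that the cumulative maximum restarts within each group:
>>> df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1, 3, 2, 0]})
>>> df.groupby("key").cummax()
   val
0    1
1    3
2    2
3    2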
| reference/api/pandas.core.groupby.DataFrameGroupBy.cummax.html |
pandas.Series.between_time | `pandas.Series.between_time`
Select values between particular times of the day (e.g., 9:00-9:30 AM).
```
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
2018-04-12 01:00:00 4
``` | Series.between_time(start_time, end_time, include_start=_NoDefault.no_default, include_end=_NoDefault.no_default, inclusive=None, axis=None)[source]#
Select values between particular times of the day (e.g., 9:00-9:30 AM).
By setting start_time to be later than end_time,
you can get the times that are not between the two times.
Parameters
start_timedatetime.time or strInitial time as a time filter limit.
end_timedatetime.time or strEnd time as a time filter limit.
include_startbool, default TrueWhether the start time needs to be included in the result.
Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated
to standardize boundary inputs. Use inclusive instead, to set
each bound as closed or open.
include_endbool, default TrueWhether the end time needs to be included in the result.
Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated
to standardize boundary inputs. Use inclusive instead, to set
each bound as closed or open.
inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; whether to set each bound as closed or open.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Determine range time on index or columns value.
For Series this parameter is unused and defaults to 0.
Returns
Series or DataFrameData from the original object filtered to the specified dates range.
Raises
TypeErrorIf the index is not a DatetimeIndex
See also
at_timeSelect values at a particular time of the day.
firstSelect initial periods of time series based on a date offset.
lastSelect final periods of time series based on a date offset.
DatetimeIndex.indexer_between_timeGet just the index locations for values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
2018-04-12 01:00:00 4
>>> ts.between_time('0:15', '0:45')
A
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
You get the times that are not between two times by setting
start_time later than end_time:
>>> ts.between_time('0:45', '0:15')
A
2018-04-09 00:00:00 1
2018-04-12 01:00:00 4
| reference/api/pandas.Series.between_time.html |
pandas.Interval.open_left | `pandas.Interval.open_left`
Check if the interval is open on the left side. | Interval.open_left#
Check if the interval is open on the left side.
For the meaning of closed and open see Interval.
Returns
boolTrue if the Interval is not closed on the left-side.
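A minimal sketch: an interval closed only on the right side is open on the left.
>>> iv = pd.Interval(0, 5, closed="right")
>>> iv.open_left
True
>>> pd.Interval(0, 5, closed="both").open_left
False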
| reference/api/pandas.Interval.open_left.html |
pandas.describe_option | `pandas.describe_option`
Prints the description for one or more registered options. | pandas.describe_option(pat, _print_desc=False) = <pandas._config.config.CallableDynamicDoc object>#
Prints the description for one or more registered options.
Call with no arguments to get a listing for all registered options.
Available options:
compute.[use_bottleneck, use_numba, use_numexpr]
display.[chop_threshold, colheader_justify, column_space, date_dayfirst,
date_yearfirst, encoding, expand_frame_repr, float_format]
display.html.[border, table_schema, use_mathjax]
display.[large_repr]
display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow,
repr]
display.[max_categories, max_columns, max_colwidth, max_dir_items,
max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage,
min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision,
show_dimensions]
display.unicode.[ambiguous_as_wide, east_asian_width]
display.[width]
io.excel.ods.[reader, writer]
io.excel.xls.[reader, writer]
io.excel.xlsb.[reader]
io.excel.xlsm.[reader, writer]
io.excel.xlsx.[reader, writer]
io.hdf.[default_format, dropna_table]
io.parquet.[engine]
io.sql.[engine]
mode.[chained_assignment, copy_on_write, data_manager, sim_interactive,
string_storage, use_inf_as_na, use_inf_as_null]
plotting.[backend]
plotting.matplotlib.[register_converters]
styler.format.[decimal, escape, formatter, na_rep, precision, thousands]
styler.html.[mathjax]
styler.latex.[environment, hrules, multicol_align, multirow_align]
styler.render.[encoding, max_columns, max_elements, max_rows, repr]
styler.sparse.[columns, index]
Parameters
patstrRegexp pattern. All matching keys will have their description displayed.
_print_descbool, default TrueIf True (default) the description(s) will be printed to stdout.
Otherwise, the description(s) will be returned as a unicode string
(for testing).
Returns
None by default, the description(s) as a unicode string if _print_desc
is False
Notes
Please reference the User Guide for more information.
The available options with its descriptions:
compute.use_bottleneckboolUse the bottleneck library to accelerate if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
compute.use_numbaboolUse the numba engine option for select operations if it is installed,
the default is False
Valid values: False,True
[default: False] [currently: False]
compute.use_numexprboolUse the numexpr library to accelerate computation if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
display.chop_thresholdfloat or Noneif set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
[default: None] [currently: None]
display.colheader_justify‘left’/’right’Controls the justification of column headers. used by DataFrameFormatter.
[default: right] [currently: right]
display.column_space No description available.[default: 12] [currently: 12]
display.date_dayfirstbooleanWhen True, prints and parses dates with the day first, eg 20/01/2005
[default: False] [currently: False]
display.date_yearfirstbooleanWhen True, prints and parses dates with the year first, eg 2005/01/20
[default: False] [currently: False]
display.encodingstr/unicodeDefaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
[default: utf-8] [currently: utf-8]
display.expand_frame_reprbooleanWhether to print out the full DataFrame repr for wide DataFrames across
multiple lines, max_columns is still respected, but the output will
wrap-around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True]
display.float_formatcallableThe callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
[default: None] [currently: None]
display.html.borderintA border=value attribute is inserted in the <table> tag
for the DataFrame HTML repr.
[default: 1] [currently: 1]
display.html.table_schemabooleanWhether to publish a Table Schema representation for frontends
that support it.
(default: False)
[default: False] [currently: False]
display.html.use_mathjaxbooleanWhen True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
[default: True] [currently: True]
display.large_repr‘truncate’/’info’For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
[default: truncate] [currently: truncate]
display.latex.escapeboolThis specifies if the to_latex method of a Dataframe escapes special
characters.
Valid values: False,True
[default: True] [currently: True]
display.latex.longtable :boolThis specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
[default: False] [currently: False]
display.latex.multicolumnboolThis specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
[default: True] [currently: True]
display.latex.multicolumn_formatstrThis specifies the column alignment format (e.g. 'l', 'c', 'r') that the to_latex
method of a Dataframe uses for multicolumns when pretty-printing MultiIndex columns.
[default: l] [currently: l]
display.latex.multirowboolThis specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
[default: False] [currently: False]
display.latex.reprbooleanWhether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
[default: False] [currently: False]
display.max_categoriesintThis sets the maximum number of categories pandas should output when
printing out a Categorical or a Series of dtype “category”.
[default: 8] [currently: 8]
display.max_columnsintIf max_cols is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 0] [currently: 0]
display.max_colwidthint or NoneThe maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a “…”
placeholder is embedded in the output. A ‘None’ value means unlimited.
[default: 50] [currently: 50]
display.max_dir_itemsintThe number of items that will be added to dir(…). ‘None’ value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
[default: 100] [currently: 100]
display.max_info_columnsintmax_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
display.max_info_rowsint or Nonedf.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
[default: 1690785] [currently: 1690785]
display.max_rowsintIf max_rows is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
display.max_seq_itemsint or NoneWhen pretty-printing a long sequence, no more than max_seq_items
will be printed. If items are omitted, they will be denoted by the
addition of “…” to the resulting string.
If set to None, the number of items to be printed is unlimited.
[default: 100] [currently: 100]
display.memory_usagebool, string or NoneThis specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,’deep’
[default: True] [currently: True]
display.min_rowsintThe numbers of rows to show in a truncated view (when max_rows is
exceeded). Ignored when max_rows is set to None or 0. When set to
None, follows the value of max_rows.
[default: 10] [currently: 10]
display.multi_sparseboolean“sparsify” MultiIndex display (don’t display repeated
elements in outer levels within groups)
[default: True] [currently: True]
display.notebook_repr_htmlbooleanWhen True, IPython notebook will use html representation for
pandas objects (if it is available).
[default: True] [currently: True]
display.pprint_nest_depthintControls the number of nested levels to process when pretty-printing
[default: 3] [currently: 3]
display.precisionintFloating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to precision in numpy.set_printoptions().
[default: 6] [currently: 6]
display.show_dimensionsboolean or ‘truncate’Whether to print out dimensions at the end of DataFrame repr.
If ‘truncate’ is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_widebooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect performance (default: False)
[default: False] [currently: False]
display.unicode.east_asian_widthbooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect performance (default: False)
[default: False] [currently: False]
display.widthintWidth of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
io.excel.ods.readerstringThe default Excel reader engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.ods.writerstringThe default Excel writer engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.xls.readerstringThe default Excel reader engine for ‘xls’ files. Available options:
auto, xlrd.
[default: auto] [currently: auto]
io.excel.xls.writerstringThe default Excel writer engine for ‘xls’ files. Available options:
auto, xlwt.
[default: auto] [currently: auto]
(Deprecated, use `` instead.)
io.excel.xlsb.readerstringThe default Excel reader engine for ‘xlsb’ files. Available options:
auto, pyxlsb.
[default: auto] [currently: auto]
io.excel.xlsm.readerstringThe default Excel reader engine for ‘xlsm’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsm.writerstringThe default Excel writer engine for ‘xlsm’ files. Available options:
auto, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.readerstringThe default Excel reader engine for ‘xlsx’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.writerstringThe default Excel writer engine for ‘xlsx’ files. Available options:
auto, openpyxl, xlsxwriter.
[default: auto] [currently: auto]
io.hdf.default_formatformatDefault writing format; if None, then
put will default to ‘fixed’ and append will default to ‘table’
[default: None] [currently: None]
io.hdf.dropna_tablebooleandrop ALL nan rows when appending to a table
[default: False] [currently: False]
io.parquet.enginestringThe default parquet reader/writer engine. Available options:
‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’
[default: auto] [currently: auto]
io.sql.enginestringThe default sql reader/writer engine. Available options:
‘auto’, ‘sqlalchemy’, the default is ‘auto’
[default: auto] [currently: auto]
mode.chained_assignmentstringRaise an exception, warn, or no action if trying to use chained assignment,
The default is warn
[default: warn] [currently: warn]
mode.copy_on_writeboolUse new copy-view behaviour using Copy-on-Write. Defaults to False,
unless overridden by the ‘PANDAS_COPY_ON_WRITE’ environment variable
(if set to “1” for True, needs to be set before pandas is imported).
[default: False] [currently: False]
mode.data_managerstringInternal data manager type; can be “block” or “array”. Defaults to “block”,
unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs
to be set before pandas is imported).
[default: block] [currently: block]
mode.sim_interactivebooleanWhether to simulate interactive mode for purposes of testing
[default: False] [currently: False]
mode.string_storagestringThe default storage for StringDtype.
[default: python] [currently: python]
mode.use_inf_as_nabooleanTrue means treat None, NaN, INF, -INF as NA (old way),
False means None and NaN are null, but INF, -INF are not NA
(new way).
[default: False] [currently: False]
mode.use_inf_as_nullbooleanuse_inf_as_null has been deprecated and will be removed in a future
version. Use use_inf_as_na instead.
[default: False] [currently: False]
(Deprecated, use mode.use_inf_as_na instead.)
plotting.backendstrThe plotting backend to use. The default value is “matplotlib”, the
backend provided with pandas. Other backends can be specified by
providing the name of the module that implements the backend.
[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_convertersbool or ‘auto’.Whether to register converters with matplotlib’s units registry for
dates, times, datetimes, and Periods. Toggling to False will remove
the converters, restoring any converters that pandas overwrote.
[default: auto] [currently: auto]
styler.format.decimalstrThe character representation for the decimal separator for floats and complex.
[default: .] [currently: .]
styler.format.escapestr, optionalWhether to escape certain characters according to the given context; html or latex.
[default: None] [currently: None]
styler.format.formatterstr, callable, dict, optionalA formatter object to be used as default within Styler.format.
[default: None] [currently: None]
styler.format.na_repstr, optionalThe string representation for values identified as missing.
[default: None] [currently: None]
styler.format.precisionintThe precision for floats and complex numbers.
[default: 6] [currently: 6]
styler.format.thousandsstr, optionalThe character representation for thousands separator for floats, int and complex.
[default: None] [currently: None]
styler.html.mathjaxboolIf False will render special CSS classes to table attributes that indicate Mathjax
will not be used in Jupyter Notebook.
[default: True] [currently: True]
styler.latex.environmentstrThe environment to replace \begin{table}. If “longtable” is used results
in a specific longtable environment format.
[default: None] [currently: None]
styler.latex.hrulesboolWhether to add horizontal rules on top and bottom and below the headers.
[default: False] [currently: False]
styler.latex.multicol_align{“r”, “c”, “l”, “naive-l”, “naive-r”}The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
decorators can also be added to non-naive values to draw vertical
rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells.
[default: r] [currently: r]
styler.latex.multirow_align{“c”, “t”, “b”}The specifier for vertical alignment of sparsified LaTeX multirows.
[default: c] [currently: c]
styler.render.encodingstrThe encoding used for output HTML and LaTeX files.
[default: utf-8] [currently: utf-8]
styler.render.max_columnsint, optionalThe maximum number of columns that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.max_elementsintThe maximum number of data-cell (<td>) elements that will be rendered before
trimming will occur over columns, rows or both if needed.
[default: 262144] [currently: 262144]
styler.render.max_rowsint, optionalThe maximum number of rows that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.reprstrDetermine which output to use in Jupyter Notebook in {“html”, “latex”}.
[default: html] [currently: html]
styler.sparse.columnsboolWhether to sparsify the display of hierarchical columns. Setting to False will
display each explicit level element in a hierarchical key for each column.
[default: True] [currently: True]
styler.sparse.indexboolWhether to sparsify the display of a hierarchical index. Setting to False will
display each explicit level element in a hierarchical key for each row.
[default: True] [currently: True]
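As a minimal usage sketch, a single option (or a regex pattern matching several options) can be described by name:
>>> import pandas as pd
>>> pd.describe_option("display.max_rows")  # prints the description listed above
>>> pd.describe_option("display.max_")      # regex pattern: all matching options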
| reference/api/pandas.describe_option.html |
pandas.Series.idxmin | `pandas.Series.idxmin`
Return the row label of the minimum value.
If multiple values equal the minimum, the first row label with that
value is returned.
```
>>> s = pd.Series(data=[1, None, 4, 1],
... index=['A', 'B', 'C', 'D'])
>>> s
A 1.0
B NaN
C 4.0
D 1.0
dtype: float64
``` | Series.idxmin(axis=0, skipna=True, *args, **kwargs)[source]#
Return the row label of the minimum value.
If multiple values equal the minimum, the first row label with that
value is returned.
Parameters
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
skipnabool, default TrueExclude NA/null values. If the entire Series is NA, the result
will be NA.
*args, **kwargsAdditional arguments and keywords have no effect but might be
accepted for compatibility with NumPy.
Returns
IndexLabel of the minimum value.
Raises
ValueErrorIf the Series is empty.
See also
numpy.argminReturn indices of the minimum values along the given axis.
DataFrame.idxminReturn index of first occurrence of minimum over requested axis.
Series.idxmaxReturn index label of the first occurrence of maximum of values.
Notes
This method is the Series version of ndarray.argmin. This method
returns the label of the minimum, while ndarray.argmin returns
the position. To get the position, use series.values.argmin().
Examples
>>> s = pd.Series(data=[1, None, 4, 1],
... index=['A', 'B', 'C', 'D'])
>>> s
A 1.0
B NaN
C 4.0
D 1.0
dtype: float64
>>> s.idxmin()
'A'
If skipna is False and there is an NA value in the data,
the function returns nan.
>>> s.idxmin(skipna=False)
nan
| reference/api/pandas.Series.idxmin.html |
pandas.DatetimeIndex.weekofyear | `pandas.DatetimeIndex.weekofyear`
The week ordinal of the year. | property DatetimeIndex.weekofyear[source]#
The week ordinal of the year.
Deprecated since version 1.1.0.
weekofyear and week have been deprecated.
Please use DatetimeIndex.isocalendar().week instead.
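A minimal sketch of the recommended replacement:
>>> idx = pd.date_range("2020-01-01", periods=3, freq="D")
>>> idx.isocalendar().week.tolist()
[1, 1, 1]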
| reference/api/pandas.DatetimeIndex.weekofyear.html |
pandas.tseries.offsets.Hour.apply_index | `pandas.tseries.offsets.Hour.apply_index`
Vectorized apply of DateOffset to DatetimeIndex. | Hour.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
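A minimal sketch of the recommended replacement, adding the offset directly to a DatetimeIndex:
>>> dtindex = pd.date_range("2022-01-01", periods=2, freq="D")
>>> shifted = pd.offsets.Hour(5) + dtindex   # vectorized shift by five hours
>>> shifted.strftime("%Y-%m-%d %H:%M").tolist()
['2022-01-01 05:00', '2022-01-02 05:00']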
| reference/api/pandas.tseries.offsets.Hour.apply_index.html |
pandas.core.window.rolling.Rolling.max | `pandas.core.window.rolling.Rolling.max`
Calculate the rolling maximum.
Include only float, int, boolean columns. | Rolling.max(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the rolling maximum.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.maxAggregating max for Series.
pandas.DataFrame.maxAggregating max for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
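A minimal sketch of a window-2 rolling maximum over a Series:
>>> s = pd.Series([1, 4, 2, 5, 3])
>>> s.rolling(window=2).max()
0    NaN
1    4.0
2    4.0
3    5.0
4    5.0
dtype: float64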
| reference/api/pandas.core.window.rolling.Rolling.max.html |
pandas.Index.set_value | `pandas.Index.set_value`
Fast lookup of value from 1-dimensional ndarray. | final Index.set_value(arr, key, value)[source]#
Fast lookup of value from 1-dimensional ndarray.
Deprecated since version 1.0.
Notes
Only use this if you know what you’re doing.
| reference/api/pandas.Index.set_value.html |
pandas.tseries.offsets.Easter.is_quarter_end | `pandas.tseries.offsets.Easter.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | Easter.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.Easter.is_quarter_end.html |
pandas.io.formats.style.Styler.set_table_styles | `pandas.io.formats.style.Styler.set_table_styles`
Set the table styles included within the <style> HTML element.
```
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': [('background-color', 'yellow')]}]
... )
``` | Styler.set_table_styles(table_styles=None, axis=0, overwrite=True, css_class_names=None)[source]#
Set the table styles included within the <style> HTML element.
This function can be used to style the entire table, columns, rows or
specific HTML selectors.
Parameters
table_styleslist or dictIf supplying a list, each individual table_style should be a
dictionary with selector and props keys. selector
should be a CSS selector that the style will be applied to
(automatically prefixed by the table’s UUID) and props
should be a list of tuples with (attribute, value).
If supplying a dict, the dict keys should correspond to
column names or index values, depending upon the specified
axis argument. These will be mapped to row or col CSS
selectors. MultiIndex values as dict keys should be
in their respective tuple form. The dict values should be
a list as specified in the form with CSS selectors and
props that will be applied to the specified row or column.
Changed in version 1.2.0.
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Apply to each column (axis=0 or 'index'), to each row
(axis=1 or 'columns'). Only used if table_styles is
dict.
New in version 1.2.0.
overwritebool, default TrueStyles are replaced if True, or extended if False. CSS
rules are preserved so most recent styles set will dominate
if selectors intersect.
New in version 1.2.0.
css_class_namesdict, optionalA dict of strings used to replace the default CSS classes described below.
New in version 1.4.0.
Returns
selfStyler
See also
Styler.set_td_classesSet the DataFrame of strings added to the class attribute of <td> HTML elements.
Styler.set_table_attributesSet the table attributes added to the <table> HTML element.
Notes
The default CSS classes dict, whose values can be replaced is as follows:
css_class_names = {"row_heading": "row_heading",
"col_heading": "col_heading",
"index_name": "index_name",
"col": "col",
"row": "row",
"col_trim": "col_trim",
"row_trim": "row_trim",
"level": "level",
"data": "data",
"blank": "blank",
"foot": "foot"}
Examples
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': [('background-color', 'yellow')]}]
... )
Or with CSS strings
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': 'background-color: yellow; font-size: 1em;'}]
... )
Adding column styling by name
>>> df.style.set_table_styles({
... 'A': [{'selector': '',
... 'props': [('color', 'red')]}],
... 'B': [{'selector': 'td',
... 'props': 'color: blue;'}]
... }, overwrite=False)
Adding row styling
>>> df.style.set_table_styles({
... 0: [{'selector': 'td:hover',
... 'props': [('font-size', '25px')]}]
... }, axis=1, overwrite=False)
See Table Visualization user guide for
more details.
| reference/api/pandas.io.formats.style.Styler.set_table_styles.html |
pandas.tseries.offsets.FY5253Quarter.get_weeks | pandas.tseries.offsets.FY5253Quarter.get_weeks | FY5253Quarter.get_weeks()#
| reference/api/pandas.tseries.offsets.FY5253Quarter.get_weeks.html |
pandas.core.groupby.DataFrameGroupBy.corrwith | `pandas.core.groupby.DataFrameGroupBy.corrwith`
Compute pairwise correlation.
```
>>> index = ["a", "b", "c", "d", "e"]
>>> columns = ["one", "two", "three", "four"]
>>> df1 = pd.DataFrame(np.arange(20).reshape(5, 4), index=index, columns=columns)
>>> df2 = pd.DataFrame(np.arange(16).reshape(4, 4), index=index[:4], columns=columns)
>>> df1.corrwith(df2)
one 1.0
two 1.0
three 1.0
four 1.0
dtype: float64
``` | property DataFrameGroupBy.corrwith[source]#
Compute pairwise correlation.
Pairwise correlation is computed between rows or columns of
DataFrame with rows or columns of Series or DataFrame. DataFrames
are first aligned along both axes before computing the
correlations.
Parameters
otherDataFrame, SeriesObject with which to compute correlations.
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to use. 0 or ‘index’ to compute row-wise, 1 or ‘columns’ for
column-wise.
dropbool, default FalseDrop missing indices from result.
method{‘pearson’, ‘kendall’, ‘spearman’} or callableMethod of correlation:
pearson : standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
callable: callable with input two 1d ndarrays and returning a float.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
SeriesPairwise correlations.
See also
DataFrame.corrCompute pairwise correlation of columns.
Examples
>>> index = ["a", "b", "c", "d", "e"]
>>> columns = ["one", "two", "three", "four"]
>>> df1 = pd.DataFrame(np.arange(20).reshape(5, 4), index=index, columns=columns)
>>> df2 = pd.DataFrame(np.arange(16).reshape(4, 4), index=index[:4], columns=columns)
>>> df1.corrwith(df2)
one 1.0
two 1.0
three 1.0
four 1.0
dtype: float64
>>> df2.corrwith(df1, axis=1)
a 1.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64
| reference/api/pandas.core.groupby.DataFrameGroupBy.corrwith.html |
pandas.tseries.offsets.QuarterEnd.is_year_end | `pandas.tseries.offsets.QuarterEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | QuarterEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.QuarterEnd.is_year_end.html |
pandas.Series.corr | `pandas.Series.corr`
Compute correlation with other Series, excluding missing values.
The two Series objects are not required to be the same length and will be
aligned internally before the correlation function is applied.
```
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> s1 = pd.Series([.2, .0, .6, .2])
>>> s2 = pd.Series([.3, .6, .0, .1])
>>> s1.corr(s2, method=histogram_intersection)
0.3
``` | Series.corr(other, method='pearson', min_periods=None)[source]#
Compute correlation with other Series, excluding missing values.
The two Series objects are not required to be the same length and will be
aligned internally before the correlation function is applied.
Parameters
otherSeriesSeries with which to compute the correlation.
method{‘pearson’, ‘kendall’, ‘spearman’} or callableMethod used to compute correlation:
pearson : Standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
callable: Callable with input two 1d ndarrays and returning a float.
Warning
Note that the returned matrix from corr will have 1 along the
diagonals and will be symmetric regardless of the callable’s
behavior.
min_periodsint, optionalMinimum number of observations needed to have a valid result.
Returns
floatCorrelation with other.
See also
DataFrame.corrCompute pairwise correlation between columns.
DataFrame.corrwithCompute pairwise correlation with another DataFrame or Series.
Notes
Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations.
Pearson correlation coefficient
Kendall rank correlation coefficient
Spearman’s rank correlation coefficient
Examples
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> s1 = pd.Series([.2, .0, .6, .2])
>>> s2 = pd.Series([.3, .6, .0, .1])
>>> s1.corr(s2, method=histogram_intersection)
0.3
| reference/api/pandas.Series.corr.html |