title | summary | context | path
---|---|---|---|
pandas.Index.is_all_dates | `pandas.Index.is_all_dates`
Whether or not the index values only consist of dates. | Index.is_all_dates[source]#
Whether or not the index values only consist of dates.
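Examples
A minimal illustrative sketch (not from the original page; values are made up, and the exact repr can vary by pandas version):
>>> idx = pd.Index([pd.Timestamp("2023-01-01"), pd.Timestamp("2023-01-02")])
>>> idx.is_all_dates
True
>>> pd.Index(["Apple", "Mango", 2.0]).is_all_dates
False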
| reference/api/pandas.Index.is_all_dates.html |
How to manipulate textual data? | How to manipulate textual data?
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for no and 1 for yes.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Fare paid by the passenger.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
| Data used for this tutorial:
Titanic data
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for no and 1 for yes.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Fare paid by the passenger.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
In [1]: import pandas as pd
In [2]: titanic = pd.read_csv("data/titanic.csv")
In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
How to manipulate textual data?#
Make all name characters lowercase.
In [4]: titanic["Name"].str.lower()
Out[4]:
0 braund, mr. owen harris
1 cumings, mrs. john bradley (florence briggs th...
2 heikkinen, miss. laina
3 futrelle, mrs. jacques heath (lily may peel)
4 allen, mr. william henry
...
886 montvila, rev. juozas
887 graham, miss. margaret edith
888 johnston, miss. catherine helen "carrie"
889 behr, mr. karl howell
890 dooley, mr. patrick
Name: Name, Length: 891, dtype: object
To make each of the strings in the Name column lowercase, select the Name column
(see the tutorial on selection of data), add the str accessor and
apply the lower method. As such, each of the strings is converted element-wise.
Similar to datetime objects in the time series tutorial
having a dt accessor, a number of
specialized string methods are available when using the str
accessor. These methods generally match the names of the
equivalent built-in string methods for single elements, but are applied
element-wise (remember element-wise calculations?)
to each of the values of the column.
Create a new column Surname that contains the surname of the passengers by extracting the part before the comma.
In [5]: titanic["Name"].str.split(",")
Out[5]:
0 [Braund, Mr. Owen Harris]
1 [Cumings, Mrs. John Bradley (Florence Briggs ...
2 [Heikkinen, Miss. Laina]
3 [Futrelle, Mrs. Jacques Heath (Lily May Peel)]
4 [Allen, Mr. William Henry]
...
886 [Montvila, Rev. Juozas]
887 [Graham, Miss. Margaret Edith]
888 [Johnston, Miss. Catherine Helen "Carrie"]
889 [Behr, Mr. Karl Howell]
890 [Dooley, Mr. Patrick]
Name: Name, Length: 891, dtype: object
Using the Series.str.split() method, each of the values is returned as a list of
2 elements. The first element is the part before the comma and the
second element is the part after the comma.
In [6]: titanic["Surname"] = titanic["Name"].str.split(",").str.get(0)
In [7]: titanic["Surname"]
Out[7]:
0 Braund
1 Cumings
2 Heikkinen
3 Futrelle
4 Allen
...
886 Montvila
887 Graham
888 Johnston
889 Behr
890 Dooley
Name: Surname, Length: 891, dtype: object
As we are only interested in the first part representing the surname
(element 0), we can again use the str accessor and apply Series.str.get() to
extract the relevant part. Indeed, these string functions can be
chained to combine multiple operations at once!
More information on extracting parts of strings is available in the user guide section on splitting and replacing strings.
Extract the passenger data about the countesses aboard the Titanic.
In [8]: titanic["Name"].str.contains("Countess")
Out[8]:
0 False
1 False
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Name, Length: 891, dtype: bool
In [9]: titanic[titanic["Name"].str.contains("Countess")]
Out[9]:
PassengerId Survived Pclass ... Cabin Embarked Surname
759 760 1 1 ... B77 S Rothes
[1 rows x 13 columns]
(Interested in her story? See Wikipedia!)
The string method Series.str.contains() checks for each of the values in the
column Name if the string contains the word Countess and returns
for each of the values True (Countess is part of the name) or
False (Countess is not part of the name). This output can be used
to subselect the data using conditional (boolean) indexing introduced in
the subsetting of data tutorial. As there was
only one countess on the Titanic, we get one row as a result.
Note
More powerful extractions on strings are supported, as the
Series.str.contains() and Series.str.extract() methods accept regular
expressions, but that is out of scope for this tutorial.
More information on extracting parts of strings is available in the user guide section on string matching and extracting.
Which passenger of the Titanic has the longest name?
In [10]: titanic["Name"].str.len()
Out[10]:
0 23
1 51
2 22
3 44
4 24
..
886 21
887 28
888 40
889 21
890 19
Name: Name, Length: 891, dtype: int64
To get the longest name we first have to get the lengths of each of the
names in the Name column. By using pandas string methods, the
Series.str.len() function is applied to each of the names individually
(element-wise).
In [11]: titanic["Name"].str.len().idxmax()
Out[11]: 307
Next, we need to get the corresponding location, preferably the index
label, in the table for which the name length is the largest. The
idxmax() method does exactly that. It is not a string method and is
applied to integers, so no str is used.
In [12]: titanic.loc[titanic["Name"].str.len().idxmax(), "Name"]
Out[12]: 'Penasco y Castellana, Mrs. Victor de Satode (Maria Josefa Perez de Soto y Vallejo)'
Based on the index name of the row (307) and the column (Name),
we can do a selection using the loc operator, introduced in the
tutorial on subsetting.
In the “Sex” column, replace values of “male” by “M” and values of “female” by “F”.
In [13]: titanic["Sex_short"] = titanic["Sex"].replace({"male": "M", "female": "F"})
In [14]: titanic["Sex_short"]
Out[14]:
0 M
1 F
2 F
3 F
4 M
..
886 M
887 F
888 F
889 M
890 M
Name: Sex_short, Length: 891, dtype: object
Although replace() is not a string method, it provides a convenient way
to use mappings or vocabularies to translate certain values. It requires
a dictionary to define the mapping {from: to}.
Warning
There is also a str.replace() method available to replace a
specific set of characters. However, when having a mapping of multiple
values, this would become:
titanic["Sex_short"] = titanic["Sex"].str.replace("female", "F")
titanic["Sex_short"] = titanic["Sex_short"].str.replace("male", "M")
This would become cumbersome and easily lead to mistakes. Just think (or
try it out yourself) what would happen if those two statements were applied
in the opposite order, as the sketch below shows…
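A hedged illustration of that pitfall on a made-up two-value Series: replacing "male" first also mangles "female" (which contains "male"), so the second replacement never matches:
In [15]: s = pd.Series(["male", "female"])
In [16]: s.str.replace("male", "M").str.replace("female", "F")
Out[16]:
0      M
1    feM
dtype: object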
REMEMBER
String methods are available using the str accessor.
String methods work element-wise and can be used for conditional
indexing.
The replace method is a convenient method to convert values
according to a given dictionary.
A full overview is provided in the user guide pages on working with text data.
| getting_started/intro_tutorials/10_text_data.html |
pandas.Index.putmask | `pandas.Index.putmask`
Return a new Index of the values set with the mask. | final Index.putmask(mask, value)[source]#
Return a new Index of the values set with the mask.
Returns
Index
See also
numpy.ndarray.putmask : Changes elements of an array based on conditional and input values.
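Examples
A hedged sketch (illustrative values; the exact Index repr depends on the pandas version):
>>> idx = pd.Index([1, 2, 3])
>>> idx.putmask([True, False, True], 10)
Int64Index([10, 2, 10], dtype='int64')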
| reference/api/pandas.Index.putmask.html |
pandas.Index.T | `pandas.Index.T`
Return the transpose, which is by definition self. | property Index.T[source]#
Return the transpose, which is by definition self.
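Examples
A minimal illustrative check (an Index is 1-dimensional, so the transpose is the object itself):
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.T is idx
True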
| reference/api/pandas.Index.T.html |
pandas.tseries.offsets.Hour.normalize | pandas.tseries.offsets.Hour.normalize | Hour.normalize#
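A hedged note: normalize is the boolean flag set at construction, indicating whether applying the offset normalizes the result to midnight. An illustrative check:
>>> pd.tseries.offsets.Hour().normalize
False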
| reference/api/pandas.tseries.offsets.Hour.normalize.html |
pandas.Series.dt.components | `pandas.Series.dt.components`
Return a DataFrame of the components of the Timedeltas.
Examples
```
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='s'))
>>> s
0 0 days 00:00:00
1 0 days 00:00:01
2 0 days 00:00:02
3 0 days 00:00:03
4 0 days 00:00:04
dtype: timedelta64[ns]
>>> s.dt.components
days hours minutes seconds milliseconds microseconds nanoseconds
0 0 0 0 0 0 0 0
1 0 0 0 1 0 0 0
2 0 0 0 2 0 0 0
3 0 0 0 3 0 0 0
4 0 0 0 4 0 0 0
``` | Series.dt.components[source]#
Return a DataFrame of the components of the Timedeltas.
Returns
DataFrame
Examples
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='s'))
>>> s
0 0 days 00:00:00
1 0 days 00:00:01
2 0 days 00:00:02
3 0 days 00:00:03
4 0 days 00:00:04
dtype: timedelta64[ns]
>>> s.dt.components
days hours minutes seconds milliseconds microseconds nanoseconds
0 0 0 0 0 0 0 0
1 0 0 0 1 0 0 0
2 0 0 0 2 0 0 0
3 0 0 0 3 0 0 0
4 0 0 0 4 0 0 0
| reference/api/pandas.Series.dt.components.html |
pandas.core.resample.Resampler.first | `pandas.core.resample.Resampler.first`
Compute the first non-null entry of each column.
```
>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[None, 5, 6], C=[1, 2, 3],
... D=['3/11/2000', '3/12/2000', '3/13/2000']))
>>> df['D'] = pd.to_datetime(df['D'])
>>> df.groupby("A").first()
B C D
A
1 5.0 1 2000-03-11
3 6.0 3 2000-03-13
>>> df.groupby("A").first(min_count=2)
B C D
A
1 NaN 1.0 2000-03-11
3 NaN NaN NaT
>>> df.groupby("A").first(numeric_only=True)
B C
A
1 5.0 1
3 6.0 3
``` | Resampler.first(numeric_only=_NoDefault.no_default, min_count=0, *args, **kwargs)[source]#
Compute the first non-null entry of each column.
Parameters
numeric_only : bool, default False
    Include only float, int, boolean columns.
min_count : int, default 0
    The required number of valid values to perform the operation. If fewer
    than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrame
    First non-null of values within each group.
See also
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
DataFrame.core.groupby.GroupBy.last : Compute the last non-null entry of each column.
DataFrame.core.groupby.GroupBy.nth : Take the nth row from each group.
Examples
>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[None, 5, 6], C=[1, 2, 3],
... D=['3/11/2000', '3/12/2000', '3/13/2000']))
>>> df['D'] = pd.to_datetime(df['D'])
>>> df.groupby("A").first()
B C D
A
1 5.0 1 2000-03-11
3 6.0 3 2000-03-13
>>> df.groupby("A").first(min_count=2)
B C D
A
1 NaN 1.0 2000-03-11
3 NaN NaN NaT
>>> df.groupby("A").first(numeric_only=True)
B C
A
1 5.0 1
3 6.0 3
| reference/api/pandas.core.resample.Resampler.first.html |
pandas.DatetimeIndex.to_series | `pandas.DatetimeIndex.to_series`
Create a Series with both index and values equal to the index keys. | DatetimeIndex.to_series(keep_tz=_NoDefault.no_default, index=None, name=None)[source]#
Create a Series with both index and values equal to the index keys.
Useful with map for returning an indexer based on an index.
Parameters
keep_tz : optional, defaults True
    Return the data keeping the timezone.
    If keep_tz is True:
        If the timezone is not set, the resulting Series will have a
        datetime64[ns] dtype.
        Otherwise the Series will have a datetime64[ns, tz] dtype; the
        tz will be preserved.
    If keep_tz is False:
        Series will have a datetime64[ns] dtype. TZ aware objects will
        have the tz removed.
    Changed in version 1.0.0: The default value is now True. In a future version,
    this keyword will be removed entirely. Stop passing the argument to obtain
    the future behavior and silence the warning.
index : Index, optional
    Index of resulting Series. If None, defaults to original index.
name : str, optional
    Name of resulting Series. If None, defaults to name of original index.
Returns
Series
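Examples
A hedged sketch (dates are illustrative):
>>> idx = pd.DatetimeIndex(['2023-01-01', '2023-01-02'], name='ts')
>>> idx.to_series()
ts
2023-01-01   2023-01-01
2023-01-02   2023-01-02
Name: ts, dtype: datetime64[ns]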
| reference/api/pandas.DatetimeIndex.to_series.html |
pandas.Series.divmod | `pandas.Series.divmod`
Return Integer division and modulo of series and other, element-wise (binary operator divmod).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divmod(b, fill_value=0)
(a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64,
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64)
``` | Series.divmod(other, level=None, fill_value=None, axis=0)[source]#
Return Integer division and modulo of series and other, element-wise (binary operator divmod).
Equivalent to divmod(series, other), but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
other : Series or scalar value
level : int or name
    Broadcast across a level, matching Index values on the
    passed MultiIndex level.
fill_value : None or float value, default None (NaN)
    Fill existing missing (NaN) values, and any new element needed for
    successful Series alignment, with this value before computation.
    If data in both corresponding Series locations is missing
    the result of filling (at that location) will be missing.
axis : {0 or ‘index’}
    Unused. Parameter needed for compatibility with DataFrame.
Returns
2-Tuple of Series
    The result of the operation.
See also
Series.rdivmod : Reverse of the Integer division and modulo operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divmod(b, fill_value=0)
(a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64,
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64)
| reference/api/pandas.Series.divmod.html |
pandas.api.indexers.BaseIndexer.get_window_bounds | `pandas.api.indexers.BaseIndexer.get_window_bounds`
Computes the bounds of a window.
| BaseIndexer.get_window_bounds(num_values=0, min_periods=None, center=None, closed=None, step=None)[source]#
Computes the bounds of a window.
Parameters
num_values : int, default 0
    Number of values that will be aggregated over.
window_size : int, default 0
    The number of rows in a window.
min_periods : int, default None
    min_periods passed from the top level rolling API.
center : bool, default None
    center passed from the top level rolling API.
closed : str, default None
    closed passed from the top level rolling API.
step : int, default None
    step passed from the top level rolling API.
    New in version 1.5.
win_type : str, default None
    win_type passed from the top level rolling API.
Returns
A tuple of ndarray[int64]s, indicating the boundaries of each window.
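Examples
A hedged sketch of a custom indexer (the expanding-window logic below is illustrative, not from the original page):
>>> import numpy as np
>>> from pandas.api.indexers import BaseIndexer
>>> class ExpandingIndexer(BaseIndexer):
...     def get_window_bounds(self, num_values=0, min_periods=None,
...                           center=None, closed=None, step=None):
...         # each window starts at row 0 and ends just past the current row
...         start = np.zeros(num_values, dtype=np.int64)
...         end = np.arange(1, num_values + 1, dtype=np.int64)
...         return start, end
>>> pd.Series([1, 2, 3]).rolling(ExpandingIndexer(), min_periods=1).sum()
0    1.0
1    3.0
2    6.0
dtype: float64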
| reference/api/pandas.api.indexers.BaseIndexer.get_window_bounds.html |
pandas.Series.rename | `pandas.Series.rename`
Alter Series index labels or name.
Function / dict values must be unique (1-to-1). Labels not contained in
a dict / Series will be left as-is. Extra labels listed don’t throw an
error.
```
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
1 2
2 3
dtype: int64
>>> s.rename("my_name") # scalar, changes Series.name
0 1
1 2
2 3
Name: my_name, dtype: int64
>>> s.rename(lambda x: x ** 2) # function, changes labels
0 1
1 2
4 3
dtype: int64
>>> s.rename({1: 3, 2: 5}) # mapping, changes labels
0 1
3 2
5 3
dtype: int64
``` | Series.rename(index=None, *, axis=None, copy=True, inplace=False, level=None, errors='ignore')[source]#
Alter Series index labels or name.
Function / dict values must be unique (1-to-1). Labels not contained in
a dict / Series will be left as-is. Extra labels listed don’t throw an
error.
Alternatively, change Series.name with a scalar value.
See the user guide for more.
Parameters
index : scalar, hashable sequence, dict-like or function, optional
    Functions or dict-like are transformations to apply to the index.
    Scalar or hashable sequence-like will alter the Series.name attribute.
axis : {0 or ‘index’}
    Unused. Parameter needed for compatibility with DataFrame.
copy : bool, default True
    Also copy underlying data.
inplace : bool, default False
    Whether to return a new Series. If True the value of copy is ignored.
level : int or level name, default None
    In case of MultiIndex, only rename labels in the specified level.
errors : {‘ignore’, ‘raise’}, default ‘ignore’
    If ‘raise’, raise KeyError when a dict-like mapper or index contains
    labels that are not present in the index being transformed.
    If ‘ignore’, existing keys will be renamed and extra keys will be ignored.
Returns
Series or None
    Series with index labels or name altered or None if inplace=True.
See also
DataFrame.rename : Corresponding DataFrame method.
Series.rename_axis : Set the name of the axis.
Examples
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
1 2
2 3
dtype: int64
>>> s.rename("my_name") # scalar, changes Series.name
0 1
1 2
2 3
Name: my_name, dtype: int64
>>> s.rename(lambda x: x ** 2) # function, changes labels
0 1
1 2
4 3
dtype: int64
>>> s.rename({1: 3, 2: 5}) # mapping, changes labels
0 1
3 2
5 3
dtype: int64
| reference/api/pandas.Series.rename.html |
pandas.Timestamp.isoweekday | `pandas.Timestamp.isoweekday`
Return the day of the week represented by the date.
Monday == 1 … Sunday == 7. | Timestamp.isoweekday()#
Return the day of the week represented by the date.
Monday == 1 … Sunday == 7.
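Examples
A hedged sketch (2023-01-02 was a Monday, 2023-01-08 a Sunday):
>>> pd.Timestamp('2023-01-02').isoweekday()
1
>>> pd.Timestamp('2023-01-08').isoweekday()
7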
| reference/api/pandas.Timestamp.isoweekday.html |
pandas.tseries.offsets.BusinessMonthEnd.copy | `pandas.tseries.offsets.BusinessMonthEnd.copy`
Return a copy of the frequency.
Examples
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | BusinessMonthEnd.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.copy.html |
API reference | API reference | This page gives an overview of all public pandas objects, functions and
methods. All classes and functions exposed in pandas.* namespace are public.
Some subpackages are public which include pandas.errors,
pandas.plotting, and pandas.testing. Public functions in
pandas.io and pandas.tseries submodules are mentioned in
the documentation. pandas.api.types subpackage holds some
public functions related to data types in pandas.
Warning
The pandas.core, pandas.compat, and pandas.util top-level modules are PRIVATE. Stable functionality in such modules is not guaranteed.
Input/output
Pickling
Flat file
Clipboard
Excel
JSON
HTML
XML
Latex
HDFStore: PyTables (HDF5)
Feather
Parquet
ORC
SAS
SPSS
SQL
Google BigQuery
STATA
General functions
Data manipulations
Top-level missing data
Top-level dealing with numeric data
Top-level dealing with datetimelike data
Top-level dealing with Interval data
Top-level evaluation
Hashing
Importing from other DataFrame libraries
Series
Constructor
Attributes
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting
Combining / comparing / joining / merging
Time Series-related
Accessors
Plotting
Serialization / IO / conversion
DataFrame
Constructor
Attributes and underlying data
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting, transposing
Combining / comparing / joining / merging
Time Series-related
Flags
Metadata
Plotting
Sparse accessor
Serialization / IO / conversion
pandas arrays, scalars, and data types
Objects
Utilities
Index objects
Index
Numeric Index
CategoricalIndex
IntervalIndex
MultiIndex
DatetimeIndex
TimedeltaIndex
PeriodIndex
Date offsets
DateOffset
BusinessDay
BusinessHour
CustomBusinessDay
CustomBusinessHour
MonthEnd
MonthBegin
BusinessMonthEnd
BusinessMonthBegin
CustomBusinessMonthEnd
CustomBusinessMonthBegin
SemiMonthEnd
SemiMonthBegin
Week
WeekOfMonth
LastWeekOfMonth
BQuarterEnd
BQuarterBegin
QuarterEnd
QuarterBegin
BYearEnd
BYearBegin
YearEnd
YearBegin
FY5253
FY5253Quarter
Easter
Tick
Day
Hour
Minute
Second
Milli
Micro
Nano
Frequencies
pandas.tseries.frequencies.to_offset
Window
Rolling window functions
Weighted window functions
Expanding window functions
Exponentially-weighted window functions
Window indexer
GroupBy
Indexing, iteration
Function application
Computations / descriptive stats
Resampling
Indexing, iteration
Function application
Upsampling
Computations / descriptive stats
Style
Styler constructor
Styler properties
Style application
Builtin styles
Style export and import
Plotting
pandas.plotting.andrews_curves
pandas.plotting.autocorrelation_plot
pandas.plotting.bootstrap_plot
pandas.plotting.boxplot
pandas.plotting.deregister_matplotlib_converters
pandas.plotting.lag_plot
pandas.plotting.parallel_coordinates
pandas.plotting.plot_params
pandas.plotting.radviz
pandas.plotting.register_matplotlib_converters
pandas.plotting.scatter_matrix
pandas.plotting.table
Options and settings
Working with options
Extensions
pandas.api.extensions.register_extension_dtype
pandas.api.extensions.register_dataframe_accessor
pandas.api.extensions.register_series_accessor
pandas.api.extensions.register_index_accessor
pandas.api.extensions.ExtensionDtype
pandas.api.extensions.ExtensionArray
pandas.arrays.PandasArray
pandas.api.indexers.check_array_indexer
Testing
Assertion functions
Exceptions and warnings
Bug report function
Test suite runner
| reference/index.html |
pandas.tseries.offsets.YearEnd.rollforward | `pandas.tseries.offsets.YearEnd.rollforward`
Roll provided date forward to next offset only if not on offset. | YearEnd.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
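Examples
A hedged sketch (dates are illustrative):
>>> ts = pd.Timestamp('2022-06-01')
>>> pd.tseries.offsets.YearEnd().rollforward(ts)
Timestamp('2022-12-31 00:00:00')
>>> pd.tseries.offsets.YearEnd().rollforward(pd.Timestamp('2022-12-31'))
Timestamp('2022-12-31 00:00:00')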
| reference/api/pandas.tseries.offsets.YearEnd.rollforward.html |
pandas.core.resample.Resampler.quantile | `pandas.core.resample.Resampler.quantile`
Return value at the given quantile. | Resampler.quantile(q=0.5, **kwargs)[source]#
Return value at the given quantile.
Parameters
q : float or array-like, default 0.5 (50% quantile)
Returns
DataFrame or Series
    Quantile of values within each group.
See also
Series.quantile : Return a series, where the index is q and the values are the quantiles.
DataFrame.quantile : Return a DataFrame, where the columns are the columns of self, and the values are the quantiles.
DataFrameGroupBy.quantile : Return a DataFrame, where the columns are groupby columns, and the values are its quantiles.
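Examples
A hedged sketch (data and frequency are illustrative; the 2D bins group [1, 2] and [3, 4]):
>>> s = pd.Series([1, 2, 3, 4],
...               index=pd.date_range('2023-01-01', periods=4, freq='D'))
>>> s.resample('2D').quantile(0.5)
2023-01-01    1.5
2023-01-03    3.5
Freq: 2D, dtype: float64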
| reference/api/pandas.core.resample.Resampler.quantile.html |
pandas.testing.assert_frame_equal | `pandas.testing.assert_frame_equal`
Check that left and right DataFrame are equal.
```
>>> from pandas.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
``` | pandas.testing.assert_frame_equal(left, right, check_dtype=True, check_index_type='equiv', check_column_type='equiv', check_frame_type=True, check_less_precise=_NoDefault.no_default, check_names=True, by_blocks=False, check_exact=False, check_datetimelike_compat=False, check_categorical=True, check_like=False, check_freq=True, check_flags=True, rtol=1e-05, atol=1e-08, obj='DataFrame')[source]#
Check that left and right DataFrame are equal.
This function is intended to compare two DataFrames and output any
differences. It is mostly intended for use in unit tests.
Additional parameters allow varying the strictness of the
equality checks performed.
Parameters
left : DataFrame
    First DataFrame to compare.
right : DataFrame
    Second DataFrame to compare.
check_dtype : bool, default True
    Whether to check the DataFrame dtype is identical.
check_index_type : bool or {‘equiv’}, default ‘equiv’
    Whether to check the Index class, dtype and inferred_type are identical.
check_column_type : bool or {‘equiv’}, default ‘equiv’
    Whether to check the columns class, dtype and inferred_type are identical.
    Is passed as the exact argument of assert_index_equal().
check_frame_type : bool, default True
    Whether to check the DataFrame class is identical.
check_less_precise : bool or int, default False
    Specify comparison precision. Only used when check_exact is False.
    5 digits (False) or 3 digits (True) after decimal points are compared.
    If int, then specify the digits to compare.
    When comparing two numbers, if the first number has magnitude less
    than 1e-5, we compare the two numbers directly and check whether
    they are equivalent within the specified precision. Otherwise, we
    compare the ratio of the second number to the first number and
    check whether it is equivalent to 1 within the specified precision.
    Deprecated since version 1.1.0: Use rtol and atol instead to define
    relative/absolute tolerance, respectively. Similar to math.isclose().
check_names : bool, default True
    Whether to check that the names attribute for both the index and
    column attributes of the DataFrame is identical.
by_blocks : bool, default False
    Specify how to compare internal data. If False, compare by columns.
    If True, compare by blocks.
check_exact : bool, default False
    Whether to compare number exactly.
check_datetimelike_compat : bool, default False
    Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
    Whether to compare internal Categorical exactly.
check_like : bool, default False
    If True, ignore the order of index & columns.
    Note: index labels must match their respective rows
    (same as in columns) - same labels must be with the same data.
check_freq : bool, default True
    Whether to check the freq attribute on a DatetimeIndex or TimedeltaIndex.
    New in version 1.1.0.
check_flags : bool, default True
    Whether to check the flags attribute.
rtol : float, default 1e-5
    Relative tolerance. Only used when check_exact is False.
    New in version 1.1.0.
atol : float, default 1e-8
    Absolute tolerance. Only used when check_exact is False.
    New in version 1.1.0.
obj : str, default ‘DataFrame’
    Specify object name being compared, internally used to show appropriate
    assertion message.
See also
assert_series_equal : Equivalent method for asserting Series equality.
DataFrame.equals : Check DataFrame equality.
Examples
This example shows comparing two DataFrames that are equal
but with columns of differing dtypes.
>>> from pandas.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
df1 equals itself.
>>> assert_frame_equal(df1, df1)
df1 differs from df2 as column ‘b’ is of a different type.
>>> assert_frame_equal(df1, df2)
Traceback (most recent call last):
...
AssertionError: Attributes of DataFrame.iloc[:, 1] (column name="b") are different
Attribute “dtype” are different
[left]: int64
[right]: float64
Ignore differing dtypes in columns with check_dtype.
>>> assert_frame_equal(df1, df2, check_dtype=False)
| reference/api/pandas.testing.assert_frame_equal.html |
pandas.tseries.offsets.YearEnd.n | pandas.tseries.offsets.YearEnd.n | YearEnd.n#
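A hedged note: n is the integer multiple of the frequency set at construction. An illustrative check:
>>> pd.tseries.offsets.YearEnd(3).n
3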
| reference/api/pandas.tseries.offsets.YearEnd.n.html |
pandas.UInt32Dtype | `pandas.UInt32Dtype`
An ExtensionDtype for uint32 integer data. | class pandas.UInt32Dtype[source]#
An ExtensionDtype for uint32 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
Attributes
None
Methods
None
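Examples
A hedged sketch showing a nullable uint32 Series (the string alias 'UInt32' selects this dtype):
>>> pd.Series([1, 2, None], dtype='UInt32')
0       1
1       2
2    <NA>
dtype: UInt32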
| reference/api/pandas.UInt32Dtype.html |
pandas ecosystem | pandas ecosystem | Increasingly, packages are being built on top of pandas to address specific needs
in data preparation, analysis and visualization.
This is encouraging because it means pandas is not only helping users to handle
their data tasks but also that it provides a better starting point for developers to
build powerful and more focused data tools.
The creation of libraries that complement pandas’ functionality also allows pandas
development to remain focused around its original requirements.
This is an inexhaustive list of projects that build on pandas in order to provide
tools in the PyData space. For a list of projects that depend on pandas,
see the
Github network dependents for pandas
or search pypi for pandas.
We’d like to make it easier for users to find these projects. If you know of other
substantial projects that you feel should be on this list, please let us know.
Data cleaning and validation#
Pyjanitor#
Pyjanitor provides a clean API for cleaning data, using method chaining.
Pandera#
Pandera provides a flexible and expressive API for performing data validation on dataframes
to make data processing pipelines more readable and robust.
Dataframes contain information that pandera explicitly validates at runtime. This is useful in
production-critical data pipelines or reproducible research settings.
pandas-path#
Since Python 3.4, pathlib has been
included in the Python standard library. Path objects provide a simple
and delightful way to interact with the file system. The pandas-path package enables the
Path API for pandas through a custom accessor .path. Getting just the filenames from
a series of full file paths is as simple as my_files.path.name. Other convenient operations like
joining paths, replacing file extensions, and checking if files exist are also available.
Statistics and machine learning#
pandas-tfrecords#
pandas-tfrecords makes it easy to save pandas DataFrames to the TensorFlow TFRecords format and to read TFRecords back into pandas.
Statsmodels#
Statsmodels is the prominent Python “statistics and econometrics library” and it has
a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
econometrics, analysis and modeling functionality that is out of pandas’ scope.
Statsmodels leverages pandas objects as the underlying data container for computation.
sklearn-pandas#
Use pandas DataFrames in your scikit-learn
ML pipeline.
Featuretools#
Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering “primitives”. Users can contribute their own primitives in Python and share them with the rest of the community.
Compose#
Compose is a machine learning tool for labeling data and prediction engineering. It allows you to structure the labeling process by parameterizing prediction problems and transforming time-driven relational data into target values with cutoff times that can be used for supervised learning.
STUMPY#
STUMPY is a powerful and scalable Python library for modern time series analysis.
At its core, STUMPY efficiently computes something called a
matrix profile,
which can be used for a wide variety of time series data mining tasks.
Visualization#
Pandas has its own Styler class for table visualization, and while
pandas also has built-in support for data visualization through charts with matplotlib,
there are a number of other pandas-compatible libraries.
Altair#
Altair is a declarative statistical visualization library for Python.
With Altair, you can spend more time understanding your data and its
meaning. Altair’s API is simple, friendly and consistent and built on
top of the powerful Vega-Lite JSON specification. This elegant
simplicity produces beautiful and effective visualizations with a
minimal amount of code. Altair works with pandas DataFrames.
Bokeh#
Bokeh is a Python interactive visualization library for large datasets that natively uses
the latest web technologies. Its goal is to provide elegant, concise construction of novel
graphics in the style of Protovis/D3, while delivering high-performance interactivity over
large data to thin clients.
Pandas-Bokeh provides a high level API
for Bokeh that can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "pandas_bokeh")
It is very similar to the matplotlib plotting backend, but provides interactive
web-based charts and maps.
Seaborn#
Seaborn is a Python visualization library based on
matplotlib. It provides a high-level, dataset-oriented
interface for creating attractive statistical graphics. The plotting functions
in seaborn understand pandas objects and leverage pandas grouping operations
internally to support concise specification of complex visualizations. Seaborn
also goes beyond matplotlib and pandas with the option to perform statistical
estimation while plotting, aggregating across observations and visualizing the
fit of statistical models to emphasize patterns in a dataset.
plotnine#
Hadley Wickham’s ggplot2 is a foundational exploratory visualization package for the R language.
Based on “The Grammar of Graphics” it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
Various implementations to other languages are available.
A good implementation for Python users is has2k1/plotnine.
IPython vega#
IPython Vega leverages Vega to create plots within Jupyter Notebook.
Plotly#
Plotly’s Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based plots. Plots can be drawn in IPython Notebooks, edited with R or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly is free for unlimited sharing, and has offline or on-premise accounts for private use.
Lux#
Lux is a Python library that facilitates fast and easy experimentation with data by automating the visual data exploration process. To use Lux, simply add an extra import alongside pandas:
import lux
import pandas as pd
df = pd.read_csv("data.csv")
df # discover interesting insights!
By printing out a dataframe, Lux automatically recommends a set of visualizations that highlights interesting trends and patterns in the dataframe. Users can leverage any existing pandas commands without modifying their code, while being able to visualize their pandas data structures (e.g., DataFrame, Series, Index) at the same time. Lux also offers a powerful, intuitive language that allows users to create Altair, matplotlib, or Vega-Lite visualizations without having to think at the level of code.
Qtpandas#
Spun off from the main pandas library, the qtpandas
library enables DataFrame visualization and manipulation in PyQt4 and PySide applications.
D-Tale#
D-Tale is a lightweight web client for visualizing pandas data structures. It
provides a rich spreadsheet-style grid which acts as a wrapper for a lot of
pandas functionality (query, sort, describe, corr…) so users can quickly
manipulate their data. There is also an interactive chart-builder using Plotly
Dash allowing users to build nice portable visualizations. D-Tale can be
invoked with the following command
import dtale
dtale.show(df)
D-Tale integrates seamlessly with Jupyter notebooks, Python terminals, Kaggle
& Google Colab. Here are some demos of the grid.
hvplot#
hvPlot is a high-level plotting API for the PyData ecosystem built on HoloViews.
It can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "hvplot")
IDE#
IPython#
IPython is an interactive command shell and distributed computing
environment. IPython tab completion works with pandas methods and also
attributes like DataFrame columns.
Jupyter Notebook / Jupyter Lab#
Jupyter Notebook is a web application for creating Jupyter notebooks.
A Jupyter notebook is a JSON document containing an ordered list
of input/output cells which can contain code, text, mathematics, plots
and rich media.
Jupyter notebooks can be converted to a number of open standard output formats
(HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText, Markdown,
Python) through ‘Download As’ in the web interface and jupyter convert
in a shell.
pandas DataFrames implement _repr_html_ and _repr_latex_ methods
which are utilized by Jupyter Notebook for displaying
(abbreviated) HTML or LaTeX tables. LaTeX output is properly escaped.
(Note: HTML tables may or may not be
compatible with non-HTML Jupyter output formats.)
See Options and Settings and
Available Options
for pandas display settings.
Quantopian/qgrid#
qgrid is “an interactive grid for sorting and filtering
DataFrames in IPython Notebook” built with SlickGrid.
Spyder#
Spyder is a cross-platform PyQt-based IDE combining the editing, analysis,
debugging and profiling functionality of a software development tool with the
data exploration, interactive execution, deep inspection and rich visualization
capabilities of a scientific environment like MATLAB or Rstudio.
Its Variable Explorer
allows users to view, manipulate and edit pandas Index, Series,
and DataFrame objects like a “spreadsheet”, including copying and modifying
values, sorting, displaying a “heatmap”, converting data types and more.
pandas objects can also be renamed, duplicated, new columns added,
copied/pasted to/from the clipboard (as TSV), and saved/loaded to/from a file.
Spyder can also import data from a variety of plain text and binary files
or the clipboard into a new pandas DataFrame via a sophisticated import wizard.
Most pandas classes, methods and data attributes can be autocompleted in
Spyder’s Editor and
IPython Console,
and Spyder’s Help pane can retrieve
and render Numpydoc documentation on pandas objects in rich text with Sphinx
both automatically and on-demand.
API#
pandas-datareader#
pandas-datareader is a remote data access library for pandas (PyPI:pandas-datareader).
It is based on functionality that was located in pandas.io.data and pandas.io.wb but was
split off in v0.19.
See more in the pandas-datareader docs:
The following data feeds are available:
Google Finance
Tiingo
Morningstar
IEX
Robinhood
Enigma
Quandl
FRED
Fama/French
World Bank
OECD
Eurostat
TSP Fund Data
Nasdaq Trader Symbol Definitions
Stooq Index Data
MOEX Data
Quandl/Python#
Quandl API for Python wraps the Quandl REST API to return
pandas DataFrames with timeseries indexes.
Pydatastream#
PyDatastream is a Python interface to the
Refinitiv Datastream (DWS)
REST API to return indexed pandas DataFrames with financial data.
This package requires valid credentials for this API (non free).
pandaSDMX#
pandaSDMX is a library to retrieve and acquire statistical data
and metadata disseminated in
SDMX 2.1, an ISO-standard
widely used by institutions such as statistics offices, central banks,
and international organisations. pandaSDMX can expose datasets and related
structural metadata including data flows, code-lists,
and data structure definitions as pandas Series
or MultiIndexed DataFrames.
fredapi#
fredapi is a Python interface to the Federal Reserve Economic Data (FRED)
provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and ALFRED database that
contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
you can obtain for free on the FRED website.
dataframe_sql#
dataframe_sql is a Python package that translates SQL syntax directly into
operations on pandas DataFrames. This is useful when migrating from a database to
using pandas or for users more comfortable with SQL looking for a way to interface
with pandas.
Domain specific#
Geopandas#
Geopandas extends pandas data objects to include geographic information which support
geometric operations. If your work entails maps and geographical coordinates, and
you love pandas, you should take a close look at Geopandas.
staircase#
staircase is a data analysis package, built upon pandas and numpy, for modelling and
manipulation of mathematical step functions. It provides a rich variety of arithmetic
operations, relational operations, logical operations, statistical operations and
aggregations for step functions defined over real numbers, datetime and timedelta domains.
xarray#
xarray brings the labeled data power of pandas to the physical sciences by
providing N-dimensional variants of the core pandas data structures. It aims to
provide a pandas-like and pandas-compatible toolkit for analytics on
multi-dimensional arrays, rather than the tabular data for which pandas excels.
IO#
BCPandas#
BCPandas provides high performance writes from pandas to Microsoft SQL Server,
far exceeding the performance of the native df.to_sql method. Internally, it uses
Microsoft’s BCP utility, but the complexity is fully abstracted away from the end user.
Rigorously tested, it is a complete replacement for df.to_sql.
Deltalake#
Deltalake python package lets you access tables stored in
Delta Lake natively in Python without the need to use Spark or
JVM. It provides the delta_table.to_pyarrow_table().to_pandas() method to convert
any Delta table into Pandas dataframe.
Out-of-core#
Blaze#
Blaze provides a standard API for doing computations with various
in-memory and on-disk backends: NumPy, pandas, SQLAlchemy, MongoDB, PyTables,
PySpark.
Cylon#
Cylon is a fast, scalable, distributed memory parallel runtime with a pandas
like Python DataFrame API. ”Core Cylon” is implemented with C++ using Apache
Arrow format to represent the data in-memory. Cylon DataFrame API implements
most of the core operators of pandas such as merge, filter, join, concat,
group-by, drop_duplicates, etc. These operators are designed to work across
thousands of cores to scale applications. It can interoperate with pandas
DataFrame by reading data from pandas or converting data to pandas so users
can selectively scale parts of their pandas DataFrame applications.
from pycylon import read_csv, DataFrame, CylonEnv
from pycylon.net import MPIConfig
# Initialize Cylon distributed environment
config: MPIConfig = MPIConfig()
env: CylonEnv = CylonEnv(config=config, distributed=True)
df1: DataFrame = read_csv('/tmp/csv1.csv')
df2: DataFrame = read_csv('/tmp/csv2.csv')
# Using 1000s of cores across the cluster to compute the join
df3: DataFrame = df1.join(other=df2, on=[0], algorithm="hash", env=env)
print(df3)
Dask#
Dask is a flexible parallel computing library for analytics. Dask
provides a familiar DataFrame interface for out-of-core, parallel and distributed computing.
Dask-ML#
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries like Scikit-Learn, XGBoost, and TensorFlow.
Ibis#
Ibis offers a standard way to write analytics code, that can be run in multiple engines. It helps in bridging the gap between local Python environments (like pandas) and remote storage and execution systems like Hadoop components (like HDFS, Impala, Hive, Spark) and SQL databases (Postgres, etc.).
Koalas#
Koalas provides a familiar pandas DataFrame interface on top of Apache Spark. It enables users to leverage multi-cores on one machine or a cluster of machines to speed up or scale their DataFrame code.
Modin#
The modin.pandas DataFrame is a parallel and distributed drop-in replacement
for pandas. This means that you can use Modin with existing pandas code or write
new code with the existing pandas API. Modin can leverage your entire machine or
cluster to speed up and scale your pandas workloads, including traditionally
time-consuming tasks like ingesting data (read_csv, read_excel,
read_parquet, etc.).
# import pandas as pd
import modin.pandas as pd
df = pd.read_csv("big.csv") # use all your cores!
Odo#
Odo provides a uniform API for moving data between different formats. It uses
pandas own read_csv for CSV IO and leverages many existing packages such as
PyTables, h5py, and pymongo to move data between non pandas formats. Its graph
based approach is also extensible by end users for custom formats that may be
too specific for the core of odo.
Pandarallel#
Pandarallel provides a simple way to parallelize your pandas operations on all your CPUs by changing only one line of code.
It also displays progress bars.
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
# df.apply(func)
df.parallel_apply(func)
Vaex#
Vaex is a Python library for Out-of-Core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, standard deviation etc, on an N-dimensional grid up to a billion (10^9) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, zero memory copy policy and lazy computations for best performance (no memory wasted).
vaex.from_pandas
vaex.to_pandas_df
Extension data types#
pandas provides an interface for defining
extension types to extend NumPy’s type
system. The following libraries implement that interface to provide types not
found in NumPy or pandas, which work well with pandas’ data containers.
Cyberpandas#
Cyberpandas provides an extension type for storing arrays of IP Addresses. These
arrays can be stored inside pandas’ Series and DataFrame.
Pandas-Genomics#
Pandas-Genomics provides extension types, extension arrays, and extension accessors for working with genomics data
Pint-Pandas#
Pint-Pandas provides an extension type for
storing numeric arrays with units. These arrays can be stored inside pandas’
Series and DataFrame. Operations between Series and DataFrame columns which
use pint’s extension array are then units aware.
Text Extensions for Pandas#
Text Extensions for Pandas
provides extension types to cover common data structures for representing natural language
data, plus library integrations that convert the outputs of popular natural language
processing libraries into Pandas DataFrames.
Accessors#
A directory of projects providing
extension accessors. This is for users to
discover new accessors and for library authors to coordinate on the namespace.
Library | Accessor | Classes | Description
cyberpandas | ip | Series | Provides common operations for working with IP addresses.
pdvega | vgplot | Series, DataFrame | Provides plotting functions from the Altair library.
pandas-genomics | genomics | Series, DataFrame | Provides common operations for quality control and analysis of genomics data.
pandas_path | path | Index, Series | Provides pathlib.Path functions for Series.
pint-pandas | pint | Series, DataFrame | Provides units support for numeric Series and DataFrames.
composeml | slice | DataFrame | Provides a generator for enhanced data slicing.
datatest | validate | Series, DataFrame, Index | Provides validation, differences, and acceptance managers.
woodwork | ww | Series, DataFrame | Provides physical, logical, and semantic data typing information for Series and DataFrames.
staircase | sc | Series | Provides methods for querying, aggregating and plotting step functions.
Development tools#
pandas-stubs#
While the pandas repository is partially typed, the package itself doesn’t expose this information for external use.
Install pandas-stubs to enable basic type coverage of the pandas API.
Learn more by reading through GH14468, GH26766, GH28142.
See installation and usage instructions on the github page.
| ecosystem.html |
pandas.Series.plot.kde | `pandas.Series.plot.kde`
Generate Kernel Density Estimate plot using Gaussian kernels.
```
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
``` | Series.plot.kde(bw_method=None, ind=None, **kwargs)[source]#
Generate Kernel Density Estimate plot using Gaussian kernels.
In statistics, kernel density estimation (KDE) is a non-parametric
way to estimate the probability density function (PDF) of a random
variable. This function uses Gaussian kernels and includes automatic
bandwidth determination.
Parameters
bw_method : str, scalar or callable, optional
    The method used to calculate the estimator bandwidth. This can be
    ‘scott’, ‘silverman’, a scalar constant or a callable.
    If None (default), ‘scott’ is used.
    See scipy.stats.gaussian_kde for more information.
ind : NumPy array or int, optional
    Evaluation points for the estimated PDF. If None (default),
    1000 equally spaced points are used. If ind is a NumPy array, the
    KDE is evaluated at the points passed. If ind is an integer,
    ind number of equally spaced points are used.
**kwargs
    Additional keyword arguments are documented in DataFrame.plot().
Returns
matplotlib.axes.Axes or numpy.ndarray of them
See also
scipy.stats.gaussian_kde : Representation of a kernel-density estimate using Gaussian kernels. This is the function used internally to estimate the PDF.
Examples
Given a Series of points randomly sampled from an unknown
distribution, estimate its PDF using KDE with automatic
bandwidth determination and plot the results, evaluating them at
1000 equally spaced points (default):
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
>>> ax = s.plot.kde(bw_method=0.3)
>>> ax = s.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the
plot of the estimated PDF:
>>> ax = s.plot.kde(ind=[1, 2, 3, 4, 5])
For DataFrame, it works in the same way:
>>> df = pd.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> ax = df.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
>>> ax = df.plot.kde(bw_method=0.3)
>>> ax = df.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the
plot of the estimated PDF:
>>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6])
| reference/api/pandas.Series.plot.kde.html |
Package overview | Package overview
pandas is a Python package providing fast,
flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the
fundamental high-level building block for doing practical, real-world data
analysis in Python. Additionally, it has the broader goal of becoming the
most powerful and flexible open source data analysis/manipulation tool
available in any language. It is already well on its way toward this goal.
pandas is well suited for many different kinds of data:
Tabular data with heterogeneously-typed columns, as in an SQL table or
Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
column labels | pandas is a Python package providing fast,
flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the
fundamental high-level building block for doing practical, real-world data
analysis in Python. Additionally, it has the broader goal of becoming the
most powerful and flexible open source data analysis/manipulation tool
available in any language. It is already well on its way toward this goal.
pandas is well suited for many different kinds of data:
Tabular data with heterogeneously-typed columns, as in an SQL table or
Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
column labels
Any other form of observational / statistical data sets. The data
need not be labeled at all to be placed into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional)
and DataFrame (2-dimensional), handle the vast majority of typical use
cases in finance, statistics, social science, and many areas of
engineering. For R users, DataFrame provides everything that R’s
data.frame provides and much more. pandas is built on top of NumPy and is intended to integrate well within a scientific
computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
Easy handling of missing data (represented as NaN) in floating point as
well as non-floating point data
Size mutability: columns can be inserted and deleted from DataFrame and
higher dimensional objects
Automatic and explicit data alignment: objects can be explicitly
aligned to a set of labels, or the user can simply ignore the labels and
let Series, DataFrame, etc. automatically align the data for you in
computations
Powerful, flexible group by functionality to perform
split-apply-combine operations on data sets, for both aggregating and
transforming data
Make it easy to convert ragged, differently-indexed data in other
Python and NumPy data structures into DataFrame objects
Intelligent label-based slicing, fancy indexing, and subsetting
of large data sets
Intuitive merging and joining data sets
Flexible reshaping and pivoting of data sets
Hierarchical labeling of axes (possible to have multiple labels per
tick)
Robust IO tools for loading data from flat files (CSV and delimited),
Excel files, databases, and saving / loading data from the ultrafast HDF5
format
Time series-specific functionality: date range generation and frequency
conversion, moving window statistics, date shifting, and lagging.
Many of these principles are here to address the shortcomings frequently
experienced using other languages / scientific research environments. For data
scientists, working with data is typically divided into multiple stages:
munging and cleaning data, analyzing / modeling it, then organizing the results
of the analysis into a form suitable for plotting or tabular display. pandas
is the ideal tool for all of these tasks.
Some other notes
pandas is fast. Many of the low-level algorithmic bits have been
extensively tweaked in Cython code. However, as with
anything else generalization usually sacrifices performance. So if you focus
on one feature for your application you may be able to create a faster
specialized tool.
pandas is a dependency of statsmodels, making it an important part of the
statistical computing ecosystem in Python.
pandas has been used extensively in production in financial applications.
Data structures#
Dimensions | Name | Description
1 | Series | 1D labeled homogeneously-typed array
2 | DataFrame | General 2D labeled, size-mutable tabular structure with potentially heterogeneously-typed columns
Why more than one data structure?#
The best way to think about the pandas data structures is as flexible
containers for lower dimensional data. For example, DataFrame is a container
for Series, and Series is a container for scalars. We would like to be
able to insert and remove objects from these containers in a dictionary-like
fashion.
Also, we would like sensible default behaviors for the common API functions
which take into account the typical orientation of time series and
cross-sectional data sets. When using the N-dimensional array (ndarrays) to store 2- and 3-dimensional
data, a burden is placed on the user to consider the orientation of the data
set when writing functions; axes are considered more or less equivalent (except
when C- or Fortran-contiguousness matters for performance). In pandas, the axes
are intended to lend more semantic meaning to the data; i.e., for a particular
data set, there is likely to be a “right” way to orient the data. The goal,
then, is to reduce the amount of mental effort required to code up data
transformations in downstream functions.
For example, with tabular data (DataFrame) it is more semantically helpful to
think of the index (the rows) and the columns rather than axis 0 and
axis 1. Iterating through the columns of the DataFrame thus results in more
readable code:
for col in df.columns:
series = df[col]
# do something with series
Mutability and copying of data#
All pandas data structures are value-mutable (the values they contain can be
altered) but not always size-mutable. The length of a Series cannot be
changed, but, for example, columns can be inserted into a DataFrame. However,
the vast majority of methods produce new objects and leave the input data
untouched. In general we like to favor immutability where sensible.
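A minimal sketch of these rules (the object names are illustrative):
```
import pandas as pd

s = pd.Series([1, 2, 3])
s.iloc[0] = 99                # value-mutable: contents can be altered in place

df = pd.DataFrame({"a": s})
df["b"] = [4, 5, 6]           # DataFrame is size-mutable: columns can be inserted

result = df.sort_values("a")  # most methods return a new object; df is left untouched
```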
Getting support#
The first stop for pandas issues and ideas is the GitHub Issue Tracker. If you have a general question,
pandas community experts can answer through Stack Overflow.
Community#
pandas is actively supported today by a community of like-minded individuals around
the world who contribute their valuable time and energy to help make open source
pandas possible. Thanks to all of our contributors.
If you’re interested in contributing, please visit the contributing guide.
pandas is a NumFOCUS-sponsored project. This helps ensure the success of the development of pandas as a world-class open-source
project and makes it possible to donate to the project.
Project governance#
The governance process that the pandas project has used informally since its inception in 2008 is formalized in the Project Governance documents.
The documents clarify how decisions are made and how the various elements of our community interact, including the relationship between open source collaborative development and work that may be funded by for-profit or non-profit entities.
Wes McKinney is the Benevolent Dictator for Life (BDFL).
Development team#
The list of the Core Team members and more detailed information can be found on the people’s page of the governance repo.
Institutional partners#
Information about current institutional partners can be found on the pandas website.
License#
BSD 3-Clause License
Copyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
All rights reserved.
Copyright (c) 2011-2022, Open source contributors.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| getting_started/overview.html |
pandas.tseries.offsets.DateOffset.is_anchored | `pandas.tseries.offsets.DateOffset.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | DateOffset.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.DateOffset.is_anchored.html |
pandas.tseries.offsets.SemiMonthEnd.day_of_month | pandas.tseries.offsets.SemiMonthEnd.day_of_month | SemiMonthEnd.day_of_month#
| reference/api/pandas.tseries.offsets.SemiMonthEnd.day_of_month.html |
pandas.DatetimeIndex.tz | `pandas.DatetimeIndex.tz`
Return the timezone.
Returns None when the array is tz-naive. | property DatetimeIndex.tz[source]#
Return the timezone.
Returns
datetime.tzinfo, pytz.tzinfo.BaseTZInfo, dateutil.tz.tz.tzfile, or None
Returns None when the array is tz-naive.
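For illustration, a minimal sketch contrasting a tz-aware and a tz-naive index (dates chosen arbitrarily):
```
import pandas as pd

aware = pd.date_range("2022-01-01", periods=2, tz="UTC")
print(aware.tz)   # a UTC tzinfo object

naive = pd.date_range("2022-01-01", periods=2)
print(naive.tz)   # None, since the index is tz-naive
```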
| reference/api/pandas.DatetimeIndex.tz.html |
Date offsets | DateOffset#
DateOffset
Standard kind of date increment used for a date range.
Properties#
DateOffset.freqstr
Return a string representing the frequency.
DateOffset.kwds
Return a dict of extra parameters for the offset.
DateOffset.name
Return a string representing the base frequency.
DateOffset.nanos
DateOffset.normalize
DateOffset.rule_code
DateOffset.n
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
Methods#
DateOffset.apply
DateOffset.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
DateOffset.copy
Return a copy of the frequency.
DateOffset.isAnchored
DateOffset.onOffset
DateOffset.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
DateOffset.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
DateOffset.__call__(*args, **kwargs)
Call self as a function.
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
DateOffset.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
DateOffset.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
DateOffset.is_year_start
Return boolean whether a timestamp occurs on the year start.
DateOffset.is_year_end
Return boolean whether a timestamp occurs on the year end.
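As a quick orienting sketch of a plain DateOffset in use (the dates below are invented, not taken from the listing above):
```
import pandas as pd

off = pd.offsets.DateOffset(months=1)
ts = pd.Timestamp("2022-01-31")
print(ts + off)              # clamps to the shorter month: 2022-02-28
print(off.is_month_end(ts))  # True: January 31 falls on a month end
```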
BusinessDay#
BusinessDay
DateOffset subclass representing possibly n business days.
Alias:
BDay
alias of pandas._libs.tslibs.offsets.BusinessDay
Properties#
BusinessDay.freqstr
Return a string representing the frequency.
BusinessDay.kwds
Return a dict of extra parameters for the offset.
BusinessDay.name
Return a string representing the base frequency.
BusinessDay.nanos
BusinessDay.normalize
BusinessDay.rule_code
BusinessDay.n
BusinessDay.weekmask
BusinessDay.holidays
BusinessDay.calendar
Methods#
BusinessDay.apply
BusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessDay.copy
Return a copy of the frequency.
BusinessDay.isAnchored
BusinessDay.onOffset
BusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessDay.__call__(*args, **kwargs)
Call self as a function.
BusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
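A minimal BusinessDay sketch (dates chosen for illustration):
```
import pandas as pd

ts = pd.Timestamp("2022-08-05")  # a Friday
print(ts + pd.offsets.BDay())    # skips the weekend: 2022-08-08 (Monday)
```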
BusinessHour#
BusinessHour
DateOffset subclass representing possibly n business hours.
Properties#
BusinessHour.freqstr
Return a string representing the frequency.
BusinessHour.kwds
Return a dict of extra parameters for the offset.
BusinessHour.name
Return a string representing the base frequency.
BusinessHour.nanos
BusinessHour.normalize
BusinessHour.rule_code
BusinessHour.n
BusinessHour.start
BusinessHour.end
BusinessHour.weekmask
BusinessHour.holidays
BusinessHour.calendar
Methods#
BusinessHour.apply
BusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessHour.copy
Return a copy of the frequency.
BusinessHour.isAnchored
BusinessHour.onOffset
BusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessHour.__call__(*args, **kwargs)
Call self as a function.
BusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
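A minimal BusinessHour sketch, assuming the default 09:00-17:00 window:
```
import pandas as pd

ts = pd.Timestamp("2022-08-05 16:00")  # Friday, 4 pm
print(ts + pd.offsets.BusinessHour())  # stays inside the window: 2022-08-05 17:00
```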
CustomBusinessDay#
CustomBusinessDay
DateOffset subclass representing custom business days excluding holidays.
Alias:
CDay
alias of pandas._libs.tslibs.offsets.CustomBusinessDay
Properties#
CustomBusinessDay.freqstr
Return a string representing the frequency.
CustomBusinessDay.kwds
Return a dict of extra parameters for the offset.
CustomBusinessDay.name
Return a string representing the base frequency.
CustomBusinessDay.nanos
CustomBusinessDay.normalize
CustomBusinessDay.rule_code
CustomBusinessDay.n
CustomBusinessDay.weekmask
CustomBusinessDay.calendar
CustomBusinessDay.holidays
Methods#
CustomBusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessDay.apply
CustomBusinessDay.copy
Return a copy of the frequency.
CustomBusinessDay.isAnchored
CustomBusinessDay.onOffset
CustomBusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessDay.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
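A minimal CustomBusinessDay sketch; the holiday below is invented for illustration:
```
import pandas as pd

# treat Monday 2022-08-08 as a holiday
cday = pd.offsets.CustomBusinessDay(holidays=["2022-08-08"])
print(pd.Timestamp("2022-08-05") + cday)  # Friday + 1 -> 2022-08-09 (Tuesday)
```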
CustomBusinessHour#
CustomBusinessHour
DateOffset subclass representing possibly n custom business days.
Properties#
CustomBusinessHour.freqstr
Return a string representing the frequency.
CustomBusinessHour.kwds
Return a dict of extra parameters for the offset.
CustomBusinessHour.name
Return a string representing the base frequency.
CustomBusinessHour.nanos
CustomBusinessHour.normalize
CustomBusinessHour.rule_code
CustomBusinessHour.n
CustomBusinessHour.weekmask
CustomBusinessHour.calendar
CustomBusinessHour.holidays
CustomBusinessHour.start
CustomBusinessHour.end
Methods#
CustomBusinessHour.apply
CustomBusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessHour.copy
Return a copy of the frequency.
CustomBusinessHour.isAnchored
CustomBusinessHour.onOffset
CustomBusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessHour.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
MonthEnd#
MonthEnd
DateOffset of one month end.
Properties#
MonthEnd.freqstr
Return a string representing the frequency.
MonthEnd.kwds
Return a dict of extra parameters for the offset.
MonthEnd.name
Return a string representing the base frequency.
MonthEnd.nanos
MonthEnd.normalize
MonthEnd.rule_code
MonthEnd.n
Methods#
MonthEnd.apply
MonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthEnd.copy
Return a copy of the frequency.
MonthEnd.isAnchored
MonthEnd.onOffset
MonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthEnd.__call__(*args, **kwargs)
Call self as a function.
MonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
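A one-line MonthEnd sketch (date chosen arbitrarily):
```
import pandas as pd

print(pd.Timestamp("2022-01-15") + pd.offsets.MonthEnd())  # rolls forward to 2022-01-31
```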
MonthBegin#
MonthBegin
DateOffset of one month at beginning.
Properties#
MonthBegin.freqstr
Return a string representing the frequency.
MonthBegin.kwds
Return a dict of extra parameters for the offset.
MonthBegin.name
Return a string representing the base frequency.
MonthBegin.nanos
MonthBegin.normalize
MonthBegin.rule_code
MonthBegin.n
Methods#
MonthBegin.apply
MonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthBegin.copy
Return a copy of the frequency.
MonthBegin.isAnchored
MonthBegin.onOffset
MonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthBegin.__call__(*args, **kwargs)
Call self as a function.
MonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthEnd#
BusinessMonthEnd
DateOffset increments between the last business day of the month.
Alias:
BMonthEnd
alias of pandas._libs.tslibs.offsets.BusinessMonthEnd
Properties#
BusinessMonthEnd.freqstr
Return a string representing the frequency.
BusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
BusinessMonthEnd.name
Return a string representing the base frequency.
BusinessMonthEnd.nanos
BusinessMonthEnd.normalize
BusinessMonthEnd.rule_code
BusinessMonthEnd.n
Methods#
BusinessMonthEnd.apply
BusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthEnd.copy
Return a copy of the frequency.
BusinessMonthEnd.isAnchored
BusinessMonthEnd.onOffset
BusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthBegin#
BusinessMonthBegin
DateOffset of one month at the first business day.
Alias:
BMonthBegin
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
Properties#
BusinessMonthBegin.freqstr
Return a string representing the frequency.
BusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
BusinessMonthBegin.name
Return a string representing the base frequency.
BusinessMonthBegin.nanos
BusinessMonthBegin.normalize
BusinessMonthBegin.rule_code
BusinessMonthBegin.n
Methods#
BusinessMonthBegin.apply
BusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthBegin.copy
Return a copy of the frequency.
BusinessMonthBegin.isAnchored
BusinessMonthBegin.onOffset
BusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthBegin.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthEnd#
CustomBusinessMonthEnd
Attributes
Alias:
CBMonthEnd
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthEnd
Properties#
CustomBusinessMonthEnd.freqstr
Return a string representing the frequency.
CustomBusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthEnd.m_offset
CustomBusinessMonthEnd.name
Return a string representing the base frequency.
CustomBusinessMonthEnd.nanos
CustomBusinessMonthEnd.normalize
CustomBusinessMonthEnd.rule_code
CustomBusinessMonthEnd.n
CustomBusinessMonthEnd.weekmask
CustomBusinessMonthEnd.calendar
CustomBusinessMonthEnd.holidays
Methods#
CustomBusinessMonthEnd.apply
CustomBusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthEnd.copy
Return a copy of the frequency.
CustomBusinessMonthEnd.isAnchored
CustomBusinessMonthEnd.onOffset
CustomBusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthBegin#
CustomBusinessMonthBegin
Attributes
Alias:
CBMonthBegin
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthBegin
Properties#
CustomBusinessMonthBegin.freqstr
Return a string representing the frequency.
CustomBusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthBegin.m_offset
CustomBusinessMonthBegin.name
Return a string representing the base frequency.
CustomBusinessMonthBegin.nanos
CustomBusinessMonthBegin.normalize
CustomBusinessMonthBegin.rule_code
CustomBusinessMonthBegin.n
CustomBusinessMonthBegin.weekmask
CustomBusinessMonthBegin.calendar
CustomBusinessMonthBegin.holidays
Methods#
CustomBusinessMonthBegin.apply
CustomBusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthBegin.copy
Return a copy of the frequency.
CustomBusinessMonthBegin.isAnchored
CustomBusinessMonthBegin.onOffset
CustomBusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthBegin.__call__(*args, ...)
Call self as a function.
CustomBusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
SemiMonthEnd#
SemiMonthEnd
Two DateOffsets per month, repeating on the last day of the month and day_of_month.
Properties#
SemiMonthEnd.freqstr
Return a string representing the frequency.
SemiMonthEnd.kwds
Return a dict of extra parameters for the offset.
SemiMonthEnd.name
Return a string representing the base frequency.
SemiMonthEnd.nanos
SemiMonthEnd.normalize
SemiMonthEnd.rule_code
SemiMonthEnd.n
SemiMonthEnd.day_of_month
Methods#
SemiMonthEnd.apply
SemiMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthEnd.copy
Return a copy of the frequency.
SemiMonthEnd.isAnchored
SemiMonthEnd.onOffset
SemiMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthEnd.__call__(*args, **kwargs)
Call self as a function.
SemiMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
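A minimal SemiMonthEnd sketch showing the two anchors per month (dates invented):
```
import pandas as pd

off = pd.offsets.SemiMonthEnd(day_of_month=15)
print(pd.Timestamp("2022-01-02") + off)  # next anchor is the 15th: 2022-01-15
print(pd.Timestamp("2022-01-20") + off)  # then the month end: 2022-01-31
```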
SemiMonthBegin#
SemiMonthBegin
Two DateOffsets per month, repeating on the first day of the month and day_of_month.
Properties#
SemiMonthBegin.freqstr
Return a string representing the frequency.
SemiMonthBegin.kwds
Return a dict of extra parameters for the offset.
SemiMonthBegin.name
Return a string representing the base frequency.
SemiMonthBegin.nanos
SemiMonthBegin.normalize
SemiMonthBegin.rule_code
SemiMonthBegin.n
SemiMonthBegin.day_of_month
Methods#
SemiMonthBegin.apply
SemiMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthBegin.copy
Return a copy of the frequency.
SemiMonthBegin.isAnchored
SemiMonthBegin.onOffset
SemiMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthBegin.__call__(*args, **kwargs)
Call self as a function.
SemiMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
Week#
Week
Weekly offset.
Properties#
Week.freqstr
Return a string representing the frequency.
Week.kwds
Return a dict of extra parameters for the offset.
Week.name
Return a string representing the base frequency.
Week.nanos
Week.normalize
Week.rule_code
Week.n
Week.weekday
Methods#
Week.apply
Week.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Week.copy
Return a copy of the frequency.
Week.isAnchored
Week.onOffset
Week.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Week.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Week.__call__(*args, **kwargs)
Call self as a function.
Week.is_month_start
Return boolean whether a timestamp occurs on the month start.
Week.is_month_end
Return boolean whether a timestamp occurs on the month end.
Week.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Week.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Week.is_year_start
Return boolean whether a timestamp occurs on the year start.
Week.is_year_end
Return boolean whether a timestamp occurs on the year end.
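A minimal Week sketch; weekday numbering starts at 0 for Monday:
```
import pandas as pd

off = pd.offsets.Week(weekday=4)         # anchor on Fridays
print(pd.Timestamp("2022-08-01") + off)  # Monday rolls to Friday 2022-08-05
```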
WeekOfMonth#
WeekOfMonth
Describes monthly dates like "the Tuesday of the 2nd week of each month".
Properties#
WeekOfMonth.freqstr
Return a string representing the frequency.
WeekOfMonth.kwds
Return a dict of extra parameters for the offset.
WeekOfMonth.name
Return a string representing the base frequency.
WeekOfMonth.nanos
WeekOfMonth.normalize
WeekOfMonth.rule_code
WeekOfMonth.n
WeekOfMonth.week
Methods#
WeekOfMonth.apply
WeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
WeekOfMonth.copy
Return a copy of the frequency.
WeekOfMonth.isAnchored
WeekOfMonth.onOffset
WeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
WeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
WeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
WeekOfMonth.weekday
WeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
WeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
WeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
WeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
WeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
WeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
LastWeekOfMonth#
LastWeekOfMonth
Describes monthly dates in the last week of the month.
Properties#
LastWeekOfMonth.freqstr
Return a string representing the frequency.
LastWeekOfMonth.kwds
Return a dict of extra parameters for the offset.
LastWeekOfMonth.name
Return a string representing the base frequency.
LastWeekOfMonth.nanos
LastWeekOfMonth.normalize
LastWeekOfMonth.rule_code
LastWeekOfMonth.n
LastWeekOfMonth.weekday
LastWeekOfMonth.week
Methods#
LastWeekOfMonth.apply
LastWeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
LastWeekOfMonth.copy
Return a copy of the frequency.
LastWeekOfMonth.isAnchored
LastWeekOfMonth.onOffset
LastWeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
LastWeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
LastWeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
LastWeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
LastWeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
LastWeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
LastWeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
LastWeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
LastWeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
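A minimal LastWeekOfMonth sketch (dates invented for illustration):
```
import pandas as pd

off = pd.offsets.LastWeekOfMonth(weekday=0)  # last Monday of each month
print(pd.Timestamp("2022-08-01") + off)      # 2022-08-29
```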
BQuarterEnd#
BQuarterEnd
DateOffset increments between the last business day of each Quarter.
Properties#
BQuarterEnd.freqstr
Return a string representing the frequency.
BQuarterEnd.kwds
Return a dict of extra parameters for the offset.
BQuarterEnd.name
Return a string representing the base frequency.
BQuarterEnd.nanos
BQuarterEnd.normalize
BQuarterEnd.rule_code
BQuarterEnd.n
BQuarterEnd.startingMonth
Methods#
BQuarterEnd.apply
BQuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterEnd.copy
Return a copy of the frequency.
BQuarterEnd.isAnchored
BQuarterEnd.onOffset
BQuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterEnd.__call__(*args, **kwargs)
Call self as a function.
BQuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterBegin#
BQuarterBegin
DateOffset increments between the first business day of each Quarter.
Properties#
BQuarterBegin.freqstr
Return a string representing the frequency.
BQuarterBegin.kwds
Return a dict of extra parameters for the offset.
BQuarterBegin.name
Return a string representing the base frequency.
BQuarterBegin.nanos
BQuarterBegin.normalize
BQuarterBegin.rule_code
BQuarterBegin.n
BQuarterBegin.startingMonth
Methods#
BQuarterBegin.apply
BQuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterBegin.copy
Return a copy of the frequency.
BQuarterBegin.isAnchored
BQuarterBegin.onOffset
BQuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterBegin.__call__(*args, **kwargs)
Call self as a function.
BQuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
QuarterEnd#
QuarterEnd
DateOffset increments between Quarter end dates.
Properties#
QuarterEnd.freqstr
Return a string representing the frequency.
QuarterEnd.kwds
Return a dict of extra parameters for the offset.
QuarterEnd.name
Return a string representing the base frequency.
QuarterEnd.nanos
QuarterEnd.normalize
QuarterEnd.rule_code
QuarterEnd.n
QuarterEnd.startingMonth
Methods#
QuarterEnd.apply
QuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterEnd.copy
Return a copy of the frequency.
QuarterEnd.isAnchored
QuarterEnd.onOffset
QuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterEnd.__call__(*args, **kwargs)
Call self as a function.
QuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
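A one-line QuarterEnd sketch, assuming the default startingMonth (calendar quarters):
```
import pandas as pd

print(pd.Timestamp("2022-02-01") + pd.offsets.QuarterEnd())  # 2022-03-31
```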
QuarterBegin#
QuarterBegin
DateOffset increments between Quarter start dates.
Properties#
QuarterBegin.freqstr
Return a string representing the frequency.
QuarterBegin.kwds
Return a dict of extra parameters for the offset.
QuarterBegin.name
Return a string representing the base frequency.
QuarterBegin.nanos
QuarterBegin.normalize
QuarterBegin.rule_code
QuarterBegin.n
QuarterBegin.startingMonth
Methods#
QuarterBegin.apply
QuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterBegin.copy
Return a copy of the frequency.
QuarterBegin.isAnchored
QuarterBegin.onOffset
QuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterBegin.__call__(*args, **kwargs)
Call self as a function.
QuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BYearEnd#
BYearEnd
DateOffset increments between the last business day of the year.
Properties#
BYearEnd.freqstr
Return a string representing the frequency.
BYearEnd.kwds
Return a dict of extra parameters for the offset.
BYearEnd.name
Return a string representing the base frequency.
BYearEnd.nanos
BYearEnd.normalize
BYearEnd.rule_code
BYearEnd.n
BYearEnd.month
Methods#
BYearEnd.apply
BYearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearEnd.copy
Return a copy of the frequency.
BYearEnd.isAnchored
BYearEnd.onOffset
BYearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearEnd.__call__(*args, **kwargs)
Call self as a function.
BYearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BYearBegin#
BYearBegin
DateOffset increments between the first business day of the year.
Properties#
BYearBegin.freqstr
Return a string representing the frequency.
BYearBegin.kwds
Return a dict of extra parameters for the offset.
BYearBegin.name
Return a string representing the base frequency.
BYearBegin.nanos
BYearBegin.normalize
BYearBegin.rule_code
BYearBegin.n
BYearBegin.month
Methods#
BYearBegin.apply
BYearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearBegin.copy
Return a copy of the frequency.
BYearBegin.isAnchored
BYearBegin.onOffset
BYearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearBegin.__call__(*args, **kwargs)
Call self as a function.
BYearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
YearEnd#
YearEnd
DateOffset increments between calendar year ends.
Properties#
YearEnd.freqstr
Return a string representing the frequency.
YearEnd.kwds
Return a dict of extra parameters for the offset.
YearEnd.name
Return a string representing the base frequency.
YearEnd.nanos
YearEnd.normalize
YearEnd.rule_code
YearEnd.n
YearEnd.month
Methods#
YearEnd.apply
YearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearEnd.copy
Return a copy of the frequency.
YearEnd.isAnchored
YearEnd.onOffset
YearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearEnd.__call__(*args, **kwargs)
Call self as a function.
YearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
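A one-line YearEnd sketch (date chosen arbitrarily):
```
import pandas as pd

print(pd.Timestamp("2022-06-01") + pd.offsets.YearEnd())  # 2022-12-31
```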
YearBegin#
YearBegin
DateOffset increments between calendar year begin dates.
Properties#
YearBegin.freqstr
Return a string representing the frequency.
YearBegin.kwds
Return a dict of extra parameters for the offset.
YearBegin.name
Return a string representing the base frequency.
YearBegin.nanos
YearBegin.normalize
YearBegin.rule_code
YearBegin.n
YearBegin.month
Methods#
YearBegin.apply
YearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearBegin.copy
Return a copy of the frequency.
YearBegin.isAnchored
YearBegin.onOffset
YearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearBegin.__call__(*args, **kwargs)
Call self as a function.
YearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253#
FY5253
Describes 52-53 week fiscal year.
Properties#
FY5253.freqstr
Return a string representing the frequency.
FY5253.kwds
Return a dict of extra parameters for the offset.
FY5253.name
Return a string representing the base frequency.
FY5253.nanos
FY5253.normalize
FY5253.rule_code
FY5253.n
FY5253.startingMonth
FY5253.variation
FY5253.weekday
Methods#
FY5253.apply
FY5253.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253.copy
Return a copy of the frequency.
FY5253.get_rule_code_suffix
FY5253.get_year_end
FY5253.isAnchored
FY5253.onOffset
FY5253.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253.__call__(*args, **kwargs)
Call self as a function.
FY5253.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253Quarter#
FY5253Quarter
DateOffset increments between business quarter dates for 52-53 week fiscal year.
Properties#
FY5253Quarter.freqstr
Return a string representing the frequency.
FY5253Quarter.kwds
Return a dict of extra parameters for the offset.
FY5253Quarter.name
Return a string representing the base frequency.
FY5253Quarter.nanos
FY5253Quarter.normalize
FY5253Quarter.rule_code
FY5253Quarter.n
FY5253Quarter.qtr_with_extra_week
FY5253Quarter.startingMonth
FY5253Quarter.variation
FY5253Quarter.weekday
Methods#
FY5253Quarter.apply
FY5253Quarter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253Quarter.copy
Return a copy of the frequency.
FY5253Quarter.get_rule_code_suffix
FY5253Quarter.get_weeks
FY5253Quarter.isAnchored
FY5253Quarter.onOffset
FY5253Quarter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253Quarter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253Quarter.year_has_extra_week
FY5253Quarter.__call__(*args, **kwargs)
Call self as a function.
FY5253Quarter.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253Quarter.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253Quarter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253Quarter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253Quarter.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253Quarter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Easter#
Easter
DateOffset for the Easter holiday using logic defined in dateutil.
Properties#
Easter.freqstr
Return a string representing the frequency.
Easter.kwds
Return a dict of extra parameters for the offset.
Easter.name
Return a string representing the base frequency.
Easter.nanos
Easter.normalize
Easter.rule_code
Easter.n
Methods#
Easter.apply
Easter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Easter.copy
Return a copy of the frequency.
Easter.isAnchored
Easter.onOffset
Easter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Easter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Easter.__call__(*args, **kwargs)
Call self as a function.
Easter.is_month_start
Return boolean whether a timestamp occurs on the month start.
Easter.is_month_end
Return boolean whether a timestamp occurs on the month end.
Easter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Easter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Easter.is_year_start
Return boolean whether a timestamp occurs on the year start.
Easter.is_year_end
Return boolean whether a timestamp occurs on the year end.
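A one-line Easter sketch; the offset rolls to the next Easter date computed by dateutil:
```
import pandas as pd

print(pd.Timestamp("2022-01-01") + pd.offsets.Easter())  # Easter 2022: 2022-04-17
```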
Tick#
Tick
Attributes
Properties#
Tick.delta
Tick.freqstr
Return a string representing the frequency.
Tick.kwds
Return a dict of extra parameters for the offset.
Tick.name
Return a string representing the base frequency.
Tick.nanos
Return an integer of the total number of nanoseconds.
Tick.normalize
Tick.rule_code
Tick.n
Methods#
Tick.copy
Return a copy of the frequency.
Tick.isAnchored
Tick.onOffset
Tick.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Tick.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Tick.__call__(*args, **kwargs)
Call self as a function.
Tick.apply
Tick.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Tick.is_month_start
Return boolean whether a timestamp occurs on the month start.
Tick.is_month_end
Return boolean whether a timestamp occurs on the month end.
Tick.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Tick.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Tick.is_year_start
Return boolean whether a timestamp occurs on the year start.
Tick.is_year_end
Return boolean whether a timestamp occurs on the year end.
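A minimal sketch of the Tick-specific properties, using Hour (a Tick subclass) for concreteness:
```
import pandas as pd

print(pd.offsets.Hour(2).nanos)  # 7200000000000: two hours in nanoseconds
print(pd.offsets.Hour(2).delta)  # the equivalent Timedelta('0 days 02:00:00')
```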
Day#
Day
Attributes
Properties#
Day.delta
Day.freqstr
Return a string representing the frequency.
Day.kwds
Return a dict of extra parameters for the offset.
Day.name
Return a string representing the base frequency.
Day.nanos
Return an integer of the total number of nanoseconds.
Day.normalize
Day.rule_code
Day.n
Methods#
Day.copy
Return a copy of the frequency.
Day.isAnchored
Day.onOffset
Day.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Day.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Day.__call__(*args, **kwargs)
Call self as a function.
Day.apply
Day.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Day.is_month_start
Return boolean whether a timestamp occurs on the month start.
Day.is_month_end
Return boolean whether a timestamp occurs on the month end.
Day.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Day.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Day.is_year_start
Return boolean whether a timestamp occurs on the year start.
Day.is_year_end
Return boolean whether a timestamp occurs on the year end.
Hour#
Hour
Attributes
Properties#
Hour.delta
Hour.freqstr
Return a string representing the frequency.
Hour.kwds
Return a dict of extra parameters for the offset.
Hour.name
Return a string representing the base frequency.
Hour.nanos
Return an integer of the total number of nanoseconds.
Hour.normalize
Hour.rule_code
Hour.n
Methods#
Hour.copy
Return a copy of the frequency.
Hour.isAnchored
Hour.onOffset
Hour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Hour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Hour.__call__(*args, **kwargs)
Call self as a function.
Hour.apply
Hour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Hour.is_month_start
Return boolean whether a timestamp occurs on the month start.
Hour.is_month_end
Return boolean whether a timestamp occurs on the month end.
Hour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Hour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Hour.is_year_start
Return boolean whether a timestamp occurs on the year start.
Hour.is_year_end
Return boolean whether a timestamp occurs on the year end.
Minute#
Minute
Attributes
Properties#
Minute.delta
Minute.freqstr
Return a string representing the frequency.
Minute.kwds
Return a dict of extra parameters for the offset.
Minute.name
Return a string representing the base frequency.
Minute.nanos
Return an integer of the total number of nanoseconds.
Minute.normalize
Minute.rule_code
Minute.n
Methods#
Minute.copy
Return a copy of the frequency.
Minute.isAnchored
Minute.onOffset
Minute.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Minute.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Minute.__call__(*args, **kwargs)
Call self as a function.
Minute.apply
Minute.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Minute.is_month_start
Return boolean whether a timestamp occurs on the month start.
Minute.is_month_end
Return boolean whether a timestamp occurs on the month end.
Minute.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Minute.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Minute.is_year_start
Return boolean whether a timestamp occurs on the year start.
Minute.is_year_end
Return boolean whether a timestamp occurs on the year end.
Second#
Second
Attributes
Properties#
Second.delta
Second.freqstr
Return a string representing the frequency.
Second.kwds
Return a dict of extra parameters for the offset.
Second.name
Return a string representing the base frequency.
Second.nanos
Return an integer of the total number of nanoseconds.
Second.normalize
Second.rule_code
Second.n
Methods#
Second.copy
Return a copy of the frequency.
Second.isAnchored
Second.onOffset
Second.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Second.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Second.__call__(*args, **kwargs)
Call self as a function.
Second.apply
Second.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Second.is_month_start
Return boolean whether a timestamp occurs on the month start.
Second.is_month_end
Return boolean whether a timestamp occurs on the month end.
Second.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Second.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Second.is_year_start
Return boolean whether a timestamp occurs on the year start.
Second.is_year_end
Return boolean whether a timestamp occurs on the year end.
Milli#
Milli
Attributes
Properties#
Milli.delta
Milli.freqstr
Return a string representing the frequency.
Milli.kwds
Return a dict of extra parameters for the offset.
Milli.name
Return a string representing the base frequency.
Milli.nanos
Return an integer of the total number of nanoseconds.
Milli.normalize
Milli.rule_code
Milli.n
Methods#
Milli.copy
Return a copy of the frequency.
Milli.isAnchored
Milli.onOffset
Milli.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Milli.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Milli.__call__(*args, **kwargs)
Call self as a function.
Milli.apply
Milli.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Milli.is_month_start
Return boolean whether a timestamp occurs on the month start.
Milli.is_month_end
Return boolean whether a timestamp occurs on the month end.
Milli.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Milli.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Milli.is_year_start
Return boolean whether a timestamp occurs on the year start.
Milli.is_year_end
Return boolean whether a timestamp occurs on the year end.
Micro#
Micro
Attributes
Properties#
Micro.delta
Micro.freqstr
Return a string representing the frequency.
Micro.kwds
Return a dict of extra parameters for the offset.
Micro.name
Return a string representing the base frequency.
Micro.nanos
Return an integer of the total number of nanoseconds.
Micro.normalize
Micro.rule_code
Micro.n
Methods#
Micro.copy
Return a copy of the frequency.
Micro.isAnchored
Micro.onOffset
Micro.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Micro.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Micro.__call__(*args, **kwargs)
Call self as a function.
Micro.apply
Micro.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Micro.is_month_start
Return boolean whether a timestamp occurs on the month start.
Micro.is_month_end
Return boolean whether a timestamp occurs on the month end.
Micro.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Micro.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Micro.is_year_start
Return boolean whether a timestamp occurs on the year start.
Micro.is_year_end
Return boolean whether a timestamp occurs on the year end.
Nano#
Nano
Attributes
Properties#
Nano.delta
Nano.freqstr
Return a string representing the frequency.
Nano.kwds
Return a dict of extra parameters for the offset.
Nano.name
Return a string representing the base frequency.
Nano.nanos
Return an integer of the total number of nanoseconds.
Nano.normalize
Nano.rule_code
Nano.n
Methods#
Nano.copy
Return a copy of the frequency.
Nano.isAnchored
Nano.onOffset
Nano.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Nano.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Nano.__call__(*args, **kwargs)
Call self as a function.
Nano.apply
Nano.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Nano.is_month_start
Return boolean whether a timestamp occurs on the month start.
Nano.is_month_end
Return boolean whether a timestamp occurs on the month end.
Nano.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Nano.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Nano.is_year_start
Return boolean whether a timestamp occurs on the year start.
Nano.is_year_end
Return boolean whether a timestamp occurs on the year end.
Frequencies#
to_offset
Return DateOffset object from string or datetime.timedelta object.
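A minimal to_offset sketch (frequency strings chosen for illustration):
```
from pandas.tseries.frequencies import to_offset

print(to_offset("2D"))     # <2 * Days>
print(to_offset("15min"))  # <15 * Minutes>
```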
| reference/offset_frequency.html | null |
pandas.Index.get_level_values | `pandas.Index.get_level_values`
Return an Index of values for requested level.
```
>>> idx = pd.Index(list('abc'))
>>> idx
Index(['a', 'b', 'c'], dtype='object')
``` | Index.get_level_values(level)[source]#
Return an Index of values for requested level.
This is primarily useful to get an individual level of values from a
MultiIndex, but is provided on Index as well for compatibility.
Parameters
level : int or str
It is either the integer position or the name of the level.
Returns
Index
Calling object, as there is only one level in the Index.
See also
MultiIndex.get_level_values
Get values for a level of a MultiIndex.
Notes
For Index, level should be 0, since there are no multiple levels.
Examples
>>> idx = pd.Index(list('abc'))
>>> idx
Index(['a', 'b', 'c'], dtype='object')
Get level values by supplying level as integer:
>>> idx.get_level_values(0)
Index(['a', 'b', 'c'], dtype='object')
| reference/api/pandas.Index.get_level_values.html |
pandas.Series.kurtosis | `pandas.Series.kurtosis`
Return unbiased kurtosis over requested axis. | Series.kurtosis(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return unbiased kurtosis over requested axis.
Kurtosis obtained using Fisher’s definition of
kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
Parameters
axis : {index (0)}
Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_only : bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be False in a future version of pandas.
**kwargs
Additional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
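A small illustrative sketch (the values are invented; the single outlier makes the tails heavy):
```
import pandas as pd

s = pd.Series([1, 2, 3, 4, 100])
print(s.kurtosis())  # strongly positive for this heavy-tailed sample
```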
| reference/api/pandas.Series.kurtosis.html |
pandas.Series.sem | `pandas.Series.sem`
Return unbiased standard error of the mean over requested axis. | Series.sem(axis=None, skipna=True, level=None, ddof=1, numeric_only=None, **kwargs)[source]#
Return unbiased standard error of the mean over requested axis.
Normalized by N-1 by default. This can be changed using the ddof argument.
Parameters
axis : {index (0)}
For Series this parameter is unused and defaults to 0.
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be False in a future version of pandas.
Returns
scalar or Series (if level specified)
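A minimal sketch of the default computation (values invented):
```
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])
print(s.sem())  # std(ddof=1) / sqrt(n) = 1.5811... / sqrt(5), about 0.7071
```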
| reference/api/pandas.Series.sem.html |
pandas.tseries.offsets.LastWeekOfMonth.rule_code | pandas.tseries.offsets.LastWeekOfMonth.rule_code | LastWeekOfMonth.rule_code#
| reference/api/pandas.tseries.offsets.LastWeekOfMonth.rule_code.html |
pandas.tseries.offsets.CustomBusinessHour | `pandas.tseries.offsets.CustomBusinessHour`
DateOffset subclass representing possibly n custom business days.
```
>>> ts = pd.Timestamp(2022, 8, 5, 16)
>>> ts + pd.offsets.CustomBusinessHour()
Timestamp('2022-08-08 09:00:00')
``` | class pandas.tseries.offsets.CustomBusinessHour#
DateOffset subclass representing possibly n custom business hours.
Parameters
nint, default 1The number of hours represented.
normalizebool, default FalseNormalize start/end dates to midnight before generating date range.
weekmaskstr, Default ‘Mon Tue Wed Thu Fri’Weekmask of valid business days, passed to numpy.busdaycalendar.
startstr, default “09:00”Start time of your custom business hour in 24h format.
endstr, default: “17:00”End time of your custom business hour in 24h format.
Examples
>>> ts = pd.Timestamp(2022, 8, 5, 16)
>>> ts + pd.offsets.CustomBusinessHour()
Timestamp('2022-08-08 09:00:00')
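The start and end parameters shift the business-hour window; a small illustrative sketch (times chosen arbitrarily):
>>> bh = pd.offsets.CustomBusinessHour(start="10:00", end="18:00")
>>> pd.Timestamp("2022-08-05 11:00") + bh
Timestamp('2022-08-05 12:00:00')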
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
next_bday
Used for moving to next business day.
offset
Alias for self._offset.
calendar
end
holidays
n
nanos
normalize
rule_code
start
weekmask
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback(other)
Roll provided date backward to next offset only if not on offset.
rollforward(other)
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.CustomBusinessHour.html |
pandas.DataFrame.rename | `pandas.DataFrame.rename`
Alter axes labels.
```
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> df.rename(columns={"A": "a", "B": "c"})
a c
0 1 4
1 2 5
2 3 6
``` | DataFrame.rename(mapper=None, *, index=None, columns=None, axis=None, copy=None, inplace=False, level=None, errors='ignore')[source]#
Alter axes labels.
Function / dict values must be unique (1-to-1). Labels not contained in
a dict / Series will be left as-is. Extra labels listed don’t throw an
error.
See the user guide for more.
Parameters
mapperdict-like or functionDict-like or function transformations to apply to
that axis’ values. Use either mapper and axis to
specify the axis to target with mapper, or index and
columns.
indexdict-like or functionAlternative to specifying axis (mapper, axis=0
is equivalent to index=mapper).
columnsdict-like or functionAlternative to specifying axis (mapper, axis=1
is equivalent to columns=mapper).
axis{0 or ‘index’, 1 or ‘columns’}, default 0Axis to target with mapper. Can be either the axis name
(‘index’, ‘columns’) or number (0, 1). The default is ‘index’.
copybool, default TrueAlso copy underlying data.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
If True then value of copy is ignored.
levelint or level name, default NoneIn case of a MultiIndex, only rename labels in the specified
level.
errors{‘ignore’, ‘raise’}, default ‘ignore’If ‘raise’, raise a KeyError when a dict-like mapper, index,
or columns contains labels that are not present in the Index
being transformed.
If ‘ignore’, existing keys will be renamed and extra keys will be
ignored.
Returns
DataFrame or NoneDataFrame with the renamed axis labels or None if inplace=True.
Raises
KeyErrorIf any of the labels is not found in the selected axis and
“errors=’raise’”.
See also
DataFrame.rename_axisSet the name of the axis.
Examples
DataFrame.rename supports two calling conventions
(index=index_mapper, columns=columns_mapper, ...)
(mapper, axis={'index', 'columns'}, ...)
We highly recommend using keyword arguments to clarify your
intent.
Rename columns using a mapping:
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> df.rename(columns={"A": "a", "B": "c"})
a c
0 1 4
1 2 5
2 3 6
Rename index using a mapping:
>>> df.rename(index={0: "x", 1: "y", 2: "z"})
A B
x 1 4
y 2 5
z 3 6
Cast index labels to a different type:
>>> df.index
RangeIndex(start=0, stop=3, step=1)
>>> df.rename(index=str).index
Index(['0', '1', '2'], dtype='object')
>>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise")
Traceback (most recent call last):
KeyError: ['C'] not found in axis
Using axis-style parameters:
>>> df.rename(str.lower, axis='columns')
a b
0 1 4
1 2 5
2 3 6
>>> df.rename({1: 2, 2: 4}, axis='index')
A B
0 1 4
2 2 5
4 3 6
| reference/api/pandas.DataFrame.rename.html |
pandas.tseries.offsets.MonthBegin.copy | `pandas.tseries.offsets.MonthBegin.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | MonthBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.MonthBegin.copy.html |
pandas.api.extensions.ExtensionArray.shape | `pandas.api.extensions.ExtensionArray.shape`
Return a tuple of the array dimensions. | property ExtensionArray.shape[source]#
Return a tuple of the array dimensions.
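For illustration, with a one-dimensional extension array built by pd.array:
>>> arr = pd.array([1, 2, 3])
>>> arr.shape
(3,)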
| reference/api/pandas.api.extensions.ExtensionArray.shape.html |
pandas.Series.between_time | `pandas.Series.between_time`
Select values between particular times of the day (e.g., 9:00-9:30 AM).
```
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
2018-04-12 01:00:00 4
``` | Series.between_time(start_time, end_time, include_start=_NoDefault.no_default, include_end=_NoDefault.no_default, inclusive=None, axis=None)[source]#
Select values between particular times of the day (e.g., 9:00-9:30 AM).
By setting start_time to be later than end_time,
you can get the times that are not between the two times.
Parameters
start_timedatetime.time or strInitial time as a time filter limit.
end_timedatetime.time or strEnd time as a time filter limit.
include_startbool, default TrueWhether the start time needs to be included in the result.
Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated
to standardize boundary inputs. Use inclusive instead, to set
each bound as closed or open.
include_endbool, default TrueWhether the end time needs to be included in the result.
Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated
to standardize boundary inputs. Use inclusive instead, to set
each bound as closed or open.
inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; whether to set each bound as closed or open.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Determine range time on index or columns value.
For Series this parameter is unused and defaults to 0.
Returns
Series or DataFrameData from the original object filtered to the specified dates range.
Raises
TypeErrorIf the index is not a DatetimeIndex
See also
at_timeSelect values at a particular time of the day.
firstSelect initial periods of time series based on a date offset.
lastSelect final periods of time series based on a date offset.
DatetimeIndex.indexer_between_timeGet just the index locations for values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
2018-04-12 01:00:00 4
>>> ts.between_time('0:15', '0:45')
A
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
You get the times that are not between two times by setting
start_time later than end_time:
>>> ts.between_time('0:45', '0:15')
A
2018-04-09 00:00:00 1
2018-04-12 01:00:00 4
| reference/api/pandas.Series.between_time.html |
pandas.errors.SettingWithCopyError | `pandas.errors.SettingWithCopyError`
Exception raised when trying to set on a copied slice from a DataFrame.
```
>>> pd.options.mode.chained_assignment = 'raise'
>>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2]}, columns=['A'])
>>> df.loc[0:3]['A'] = 'a'
... # SettingWithCopyError: A value is trying to be set on a copy of a...
``` | exception pandas.errors.SettingWithCopyError[source]#
Exception raised when trying to set on a copied slice from a DataFrame.
The mode.chained_assignment option needs to be set to ‘raise’. This can
happen unintentionally when using chained indexing.
For more information on evaluation order,
see the user guide.
For more information on view vs. copy,
see the user guide.
Examples
>>> pd.options.mode.chained_assignment = 'raise'
>>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2]}, columns=['A'])
>>> df.loc[0:3]['A'] = 'a'
... # SettingWithCopyError: A value is trying to be set on a copy of a...
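The usual fix is to perform the assignment in a single .loc call on the original object; a minimal sketch:
>>> df.loc[0:3, 'A'] = 0 # one indexing operation, so no chained assignment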
| reference/api/pandas.errors.SettingWithCopyError.html |
pandas.tseries.offsets.BusinessMonthBegin.is_quarter_end | `pandas.tseries.offsets.BusinessMonthBegin.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | BusinessMonthBegin.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.BusinessMonthBegin.is_quarter_end.html |
pandas.DatetimeIndex.round | `pandas.DatetimeIndex.round`
Perform round operation on the data to the specified freq.
The frequency level to round the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
```
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.round('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 12:00:00'],
dtype='datetime64[ns]', freq=None)
``` | DatetimeIndex.round(*args, **kwargs)[source]#
Perform round operation on the data to the specified freq.
Parameters
freqstr or OffsetThe frequency level to round the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’Only relevant for DatetimeIndex:
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False designates
a non-DST time (note that this flag is only applicable for
ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous
times.
nonexistent‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise an NonExistentTimeError if there are
nonexistent times.
Returns
DatetimeIndex, TimedeltaIndex, or SeriesIndex of the same type for a DatetimeIndex or TimedeltaIndex,
or a Series with the same index for a Series.
Raises
ValueError if the freq cannot be converted.
Notes
If the timestamps have a timezone, rounding will take place relative to the
local (“wall”) time and re-localized to the same timezone. When rounding
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.round('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 12:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.round("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.floor("2H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.floor("2H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
| reference/api/pandas.DatetimeIndex.round.html |
pandas.merge_ordered | `pandas.merge_ordered`
Perform a merge for ordered data with optional filling/interpolation.
Designed for ordered data like time series data. Optionally
perform group-wise merge (see examples).
```
>>> df1 = pd.DataFrame(
... {
... "key": ["a", "c", "e", "a", "c", "e"],
... "lvalue": [1, 2, 3, 1, 2, 3],
... "group": ["a", "a", "a", "b", "b", "b"]
... }
... )
>>> df1
key lvalue group
0 a 1 a
1 c 2 a
2 e 3 a
3 a 1 b
4 c 2 b
5 e 3 b
``` | pandas.merge_ordered(left, right, on=None, left_on=None, right_on=None, left_by=None, right_by=None, fill_method=None, suffixes=('_x', '_y'), how='outer')[source]#
Perform a merge for ordered data with optional filling/interpolation.
Designed for ordered data like time series data. Optionally
perform group-wise merge (see examples).
Parameters
leftDataFrame
rightDataFrame
onlabel or listField names to join on. Must be found in both DataFrames.
left_onlabel or list, or array-likeField names to join on in left DataFrame. Can be a vector or list of
vectors of the length of the DataFrame to use a particular vector as
the join key instead of columns.
right_onlabel or list, or array-likeField names to join on in right DataFrame or vector/list of vectors per
left_on docs.
left_bycolumn name or list of column namesGroup left DataFrame by group columns and merge piece by piece with
right DataFrame.
right_bycolumn name or list of column namesGroup right DataFrame by group columns and merge piece by piece with
left DataFrame.
fill_method{‘ffill’, None}, default NoneInterpolation method for data.
suffixeslist-like, default is (“_x”, “_y”)A length-2 sequence where each element is optionally a string
indicating the suffix to add to overlapping column names in
left and right respectively. Pass a value of None instead
of a string to indicate that the column name from left or
right should be left as-is, with no suffix. At least one of the
values must not be None.
Changed in version 0.25.0.
how{‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘outer’
left: use only keys from left frame (SQL: left outer join)
right: use only keys from right frame (SQL: right outer join)
outer: use union of keys from both frames (SQL: full outer join)
inner: use intersection of keys from both frames (SQL: inner join).
Returns
DataFrameThe merged DataFrame output type will be the same as
‘left’, if it is a subclass of DataFrame.
See also
mergeMerge with a database-style join.
merge_asofMerge on nearest keys.
Examples
>>> df1 = pd.DataFrame(
... {
... "key": ["a", "c", "e", "a", "c", "e"],
... "lvalue": [1, 2, 3, 1, 2, 3],
... "group": ["a", "a", "a", "b", "b", "b"]
... }
... )
>>> df1
key lvalue group
0 a 1 a
1 c 2 a
2 e 3 a
3 a 1 b
4 c 2 b
5 e 3 b
>>> df2 = pd.DataFrame({"key": ["b", "c", "d"], "rvalue": [1, 2, 3]})
>>> df2
key rvalue
0 b 1
1 c 2
2 d 3
>>> merge_ordered(df1, df2, fill_method="ffill", left_by="group")
key lvalue group rvalue
0 a 1 a NaN
1 b 1 a 1.0
2 c 2 a 2.0
3 d 2 a 3.0
4 e 3 a 3.0
5 a 1 b NaN
6 b 1 b 1.0
7 c 2 b 2.0
8 d 2 b 3.0
9 e 3 b 3.0
| reference/api/pandas.merge_ordered.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.is_year_end | `pandas.tseries.offsets.CustomBusinessMonthEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | CustomBusinessMonthEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_year_end.html |
Contributing to the code base | Contributing to the code base
Table of Contents:
Code standards
Pre-commit
Optional dependencies
Backwards compatibility | Table of Contents:
Code standards
Pre-commit
Optional dependencies
Backwards compatibility
Type hints
Style guidelines
pandas-specific types
Validating type hints
Testing type hints in code using pandas
Testing with continuous integration
Test-driven development
Writing tests
Using pytest
Test structure
Preferred pytest idioms
Testing a warning
Testing an exception
Testing involving files
Testing involving network connectivity
Example
Using hypothesis
Running the test suite
Running the performance test suite
Documenting your code
Code standards#
Writing good code is not just about what you write. It is also about how you
write it. During Continuous Integration testing, several
tools will be run to check your code for stylistic errors.
Generating any warnings will cause the test to fail.
Thus, good style is a requirement for submitting code to pandas.
There is a tool in pandas to help contributors verify their changes before
contributing them to the project:
./ci/code_checks.sh
The script validates the doctests, formatting in docstrings, and
imported modules. It is possible to run the checks independently by using the
parameters docstring, code, and doctests
(e.g. ./ci/code_checks.sh doctests).
In addition, because a lot of people use our library, it is important that we
do not make sudden changes to the code that could break
a lot of user code; that is, we need the code to be as backwards compatible
as possible to avoid mass breakages.
In addition to ./ci/code_checks.sh, some extra checks (including static type
checking) are run by pre-commit - see here
for how to run them.
Pre-commit#
Additionally, Continuous Integration will run code formatting checks
like black, flake8 (including a pandas-dev-flaker plugin),
isort, and cpplint, among others, using pre-commit hooks.
Any warnings from these checks will cause the Continuous Integration to fail; therefore,
it is helpful to run the check yourself before submitting code. This
can be done by installing pre-commit:
pip install pre-commit
and then running:
pre-commit install
from the root of the pandas repository. Now all of the styling checks will be
run each time you commit changes without your needing to run each one manually.
In addition, using pre-commit will also allow you to more easily
remain up-to-date with our code checks as they change.
Note that if needed, you can skip these checks with git commit --no-verify.
If you don’t want to use pre-commit as part of your workflow, you can still use it
to run its checks with:
pre-commit run --files <files you have modified>
without needing to have done pre-commit install beforehand.
If you want to run checks on all recently committed files on upstream/main you can use:
pre-commit run --from-ref=upstream/main --to-ref=HEAD --all-files
without needing to have done pre-commit install beforehand.
Note
You may want to periodically run pre-commit gc, to clean up repos
which are no longer used.
Note
If you have conflicting installations of virtualenv, then you may get an
error - see here.
Also, due to a bug in virtualenv,
you may run into issues if you’re using conda. To solve this, you can downgrade
virtualenv to version 20.0.33.
Optional dependencies#
Optional dependencies (e.g. matplotlib) should be imported with the private helper
pandas.compat._optional.import_optional_dependency. This ensures a
consistent error message when the dependency is not met.
All methods using an optional dependency should include a test asserting that an
ImportError is raised when the optional dependency is not found. This test
should be skipped if the library is present.
All optional dependencies should be documented in
Optional dependencies and the minimum required version should be
set in the pandas.compat._optional.VERSIONS dict.
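A minimal sketch of the pattern, assuming matplotlib as the optional dependency (the helper is real; the wrapper function below is illustrative):
from pandas.compat._optional import import_optional_dependency

def _get_plot_backend():
    # Raises a consistent, informative ImportError if matplotlib is
    # missing or older than the minimum version in VERSIONS.
    return import_optional_dependency("matplotlib")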
Backwards compatibility#
Please try to maintain backward compatibility. pandas has lots of users with lots of
existing code, so don’t break it if at all possible. If you think breakage is required,
clearly state why as part of the pull request. Also, be careful when changing method
signatures and add deprecation warnings where needed. Also, add the deprecated sphinx
directive to the deprecated functions or methods.
If a function with the same arguments as the one being deprecated exists, you can use
the pandas.util._decorators.deprecate:
from pandas.util._decorators import deprecate
deprecate('old_func', 'new_func', '1.1.0')
Otherwise, you need to do it manually:
import warnings
from pandas.util._exceptions import find_stack_level


def old_func():
    """Summary of the function.

    .. deprecated:: 1.1.0
       Use new_func instead.
    """
    warnings.warn(
        'Use new_func instead.',
        FutureWarning,
        stacklevel=find_stack_level(),
    )
    new_func()


def new_func():
    pass
You’ll also need to
Write a new test that asserts a warning is issued when calling with the deprecated argument
Update all of pandas existing tests and code to use the new argument
See Testing a warning for more.
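A sketch of such a test for the old_func example above, using the tm.assert_produces_warning helper described under Testing a warning:
import pandas._testing as tm

def test_old_func_deprecated():
    # Calling the deprecated function should emit a FutureWarning.
    with tm.assert_produces_warning(FutureWarning):
        old_func()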
Type hints#
pandas strongly encourages the use of PEP 484 style type hints. New development should contain type hints and pull requests to annotate existing code are accepted as well!
Style guidelines#
Type imports should follow the from typing import ... convention. Since PEP 585, some builtin constructs, such as list and tuple, can be used directly for type annotations and no longer need to be imported. So rather than
import typing
primes: typing.List[int] = []
You should write
primes: list[int] = []
Optional should be avoided in favor of the shorter | None, so instead of
from typing import Union
maybe_primes: list[Union[int, None]] = []
or
from typing import Optional
maybe_primes: list[Optional[int]] = []
You should write
from __future__ import annotations # noqa: F404
maybe_primes: list[int | None] = []
In some cases in the code base, classes may define class variables that shadow builtins. This causes an issue as described in Mypy 1775. The defensive solution here is to create an unambiguous alias of the builtin and use that within your annotation. For example, if you come across a definition like
class SomeClass1:
    str = None
The appropriate way to annotate this would be as follows
str_type = str


class SomeClass2:
    str: str_type = None
In some cases you may be tempted to use cast from the typing module when you know better than the analyzer. This occurs particularly when using custom inference functions. For example
from typing import cast
from pandas.core.dtypes.common import is_number
def cannot_infer_bad(obj: Union[str, int, float]):
    if is_number(obj):
        ...
    else:  # Reasonably only str objects would reach this but...
        obj = cast(str, obj)  # Mypy complains without this!
        return obj.upper()
The limitation here is that while a human can reasonably understand that is_number would catch the int and float types, mypy cannot make that same inference just yet (see mypy #5206). While the above works, the use of cast is strongly discouraged. Where applicable, a refactor of the code to appease static analysis is preferable:
def cannot_infer_good(obj: Union[str, int, float]):
    if isinstance(obj, str):
        return obj.upper()
    else:
        ...
With custom types and inference this is not always possible so exceptions are made, but every effort should be exhausted to avoid cast before going down such paths.
pandas-specific types#
Commonly used types specific to pandas will appear in pandas._typing and you should use these where applicable. This module is private for now but ultimately this should be exposed to third party libraries who want to implement type checking against pandas.
For example, quite a few functions in pandas accept a dtype argument. This can be expressed as a string like "object", a numpy.dtype like np.int64 or even a pandas ExtensionDtype like pd.CategoricalDtype. Rather than burden the user with having to constantly annotate all of those options, this can simply be imported and reused from the pandas._typing module
from pandas._typing import Dtype
def as_type(dtype: Dtype) -> ...:
    ...
This module will ultimately house types for repeatedly used concepts like “path-like”, “array-like”, “numeric”, etc… and can also hold aliases for commonly appearing parameters like axis. Development of this module is active so be sure to refer to the source for the most up to date list of available types.
Validating type hints#
pandas uses mypy and pyright to statically analyze the code base and type hints. After making any change you can ensure your type hints are correct by running
# the following might fail if the installed pandas version does not correspond to your local git version
pre-commit run --hook-stage manual --all-files
# if the above fails due to stubtest
SKIP=stubtest pre-commit run --hook-stage manual --all-files
in your activated python environment. A recent version of numpy (>=1.22.0) is required for type validation.
Testing type hints in code using pandas#
Warning
Pandas is not yet a py.typed library (PEP 561)!
The primary purpose of locally declaring pandas as a py.typed library is to test and
improve the pandas-builtin type annotations.
Until pandas becomes a py.typed library, it is possible to easily experiment with the type
annotations shipped with pandas by creating an empty file named “py.typed” in the pandas
installation folder:
python -c "import pandas; import pathlib; (pathlib.Path(pandas.__path__[0]) / 'py.typed').touch()"
The existence of the py.typed file signals to type checkers that pandas is already a py.typed
library. This makes type checkers aware of the type annotations shipped with pandas.
Testing with continuous integration#
The pandas test suite will run automatically on GitHub Actions
continuous integration services, once your pull request is submitted.
However, if you wish to run the test suite on a branch prior to submitting the pull request,
then the continuous integration services need to be hooked to your GitHub repository. Instructions are here
for GitHub Actions.
A pull-request will be considered for merging when you have an all ‘green’ build. If any tests are failing,
then you will get a red ‘X’, where you can click through to see the individual failed tests.
This is an example of a green build.
Test-driven development#
pandas is serious about testing and strongly encourages contributors to embrace
test-driven development (TDD).
This development process “relies on the repetition of a very short development cycle:
first the developer writes an (initially failing) automated test case that defines a desired
improvement or new function, then produces the minimum amount of code to pass that test.”
So, before actually writing any code, you should write your tests. Often the test can be
taken from the original GitHub issue. However, it is always worth considering additional
use cases and writing corresponding tests.
Adding tests is one of the most common requests after code is pushed to pandas. Therefore,
it is worth getting in the habit of writing tests ahead of time so this is never an issue.
Writing tests#
All tests should go into the tests subdirectory of the specific package.
This folder contains many current examples of tests, and we suggest looking to these for
inspiration. Ideally, there should be one, and only one, obvious place for a test to reside.
Until we reach that ideal, these are some rules of thumb for where a test should
be located.
Does your test depend only on code in pd._libs.tslibs?
This test likely belongs in one of:
tests.tslibs
Note
No file in tests.tslibs should import from any pandas modules
outside of pd._libs.tslibs
tests.scalar
tests.tseries.offsets
Does your test depend only on code in pd._libs?
This test likely belongs in one of:
tests.libs
tests.groupby.test_libgroupby
Is your test for an arithmetic or comparison method?
This test likely belongs in one of:
tests.arithmetic
Note
These are intended for tests that can be shared to test the behavior
of DataFrame/Series/Index/ExtensionArray using the box_with_array
fixture.
tests.frame.test_arithmetic
tests.series.test_arithmetic
Is your test for a reduction method (min, max, sum, prod, …)?
This test likely belongs in one of:
tests.reductions
Note
These are intended for tests that can be shared to test the behavior
of DataFrame/Series/Index/ExtensionArray.
tests.frame.test_reductions
tests.series.test_reductions
tests.test_nanops
Is your test for an indexing method?
This is the most difficult case for deciding where a test belongs, because
there are many of these tests, and many of them test more than one method
(e.g. both Series.__getitem__ and Series.loc.__getitem__)
Is the test specifically testing an Index method (e.g. Index.get_loc,
Index.get_indexer)?
This test likely belongs in one of:
tests.indexes.test_indexing
tests.indexes.fooindex.test_indexing
Within those files there should be a method-specific test class, e.g.
TestGetLoc.
In most cases, neither Series nor DataFrame objects should be
needed in these tests.
Is the test for a Series or DataFrame indexing method other than
__getitem__ or __setitem__, e.g. xs, where, take,
mask, lookup, or insert?
This test likely belongs in one of:
tests.frame.indexing.test_methodname
tests.series.indexing.test_methodname
Is the test for any of loc, iloc, at, or iat?
This test likely belongs in one of:
tests.indexing.test_loc
tests.indexing.test_iloc
tests.indexing.test_at
tests.indexing.test_iat
Within the appropriate file, test classes correspond to either types of
indexers (e.g. TestLocBooleanMask) or major use cases
(e.g. TestLocSetitemWithExpansion).
See the note in section D) about tests that test multiple indexing methods.
Is the test for Series.__getitem__, Series.__setitem__,
DataFrame.__getitem__, or DataFrame.__setitem__?
This test likely belongs in one of:
tests.series.test_getitem
tests.series.test_setitem
tests.frame.test_getitem
tests.frame.test_setitem
In many cases such a test may test multiple similar methods, e.g.
import pandas as pd
import pandas._testing as tm


def test_getitem_listlike_of_ints():
    ser = pd.Series(range(5))

    result = ser[[3, 4]]
    expected = pd.Series([3, 4], index=[3, 4])
    tm.assert_series_equal(result, expected)

    result = ser.loc[[3, 4]]
    tm.assert_series_equal(result, expected)
In cases like this, the test location should be based on the underlying
method being tested. Or in the case of a test for a bugfix, the location
of the actual bug. So in this example, we know that Series.__getitem__
calls Series.loc.__getitem__, so this is really a test for
loc.__getitem__. So this test belongs in tests.indexing.test_loc.
Is your test for a DataFrame or Series method?
Is the method a plotting method?
This test likely belongs in one of:
tests.plotting
Is the method an IO method?
This test likely belongs in one of:
tests.io
Otherwise
This test likely belongs in one of:
tests.series.methods.test_mymethod
tests.frame.methods.test_mymethod
Note
If a test can be shared between DataFrame/Series using the
frame_or_series fixture, by convention it goes in the
tests.frame file (see the sketch after this list).
Is your test for an Index method, not depending on Series/DataFrame?
This test likely belongs in one of:
tests.indexes
Is your test for one of the pandas-provided ExtensionArrays (Categorical,
DatetimeArray, TimedeltaArray, PeriodArray, IntervalArray,
PandasArray, FloatArray, BoolArray, StringArray)?
This test likely belongs in one of:
tests.arrays
Is your test for all ExtensionArray subclasses (the “EA Interface”)?
This test likely belongs in one of:
tests.extension
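For tests shared via the frame_or_series fixture mentioned in the note above, a hypothetical sketch looks like this (the fixture is parametrized over the two classes, so the test runs once per class):
def test_shift_preserves_type(frame_or_series):
    # frame_or_series yields pd.DataFrame on one run and pd.Series on the other.
    obj = frame_or_series([1, 2, 3])
    result = obj.shift(1)
    assert type(result) is type(obj)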
Using pytest#
Test structure#
pandas’ existing test structure is mostly class-based, meaning that you will typically find tests wrapped in a class.
class TestReallyCoolFeature:
    def test_cool_feature_aspect(self):
        pass
We prefer a more functional style using the pytest framework, which offers a richer testing
framework that will facilitate testing and developing. Thus, instead of writing test classes, we will write test functions like this:
def test_really_cool_feature():
    pass
Preferred pytest idioms#
Functional tests should be named def test_* and take only arguments that are either fixtures or parameters.
Use a bare assert for testing scalars and truth-testing.
Use tm.assert_series_equal(result, expected) and tm.assert_frame_equal(result, expected) for comparing Series and DataFrame results respectively.
Use @pytest.mark.parametrize when testing multiple cases.
Use pytest.mark.xfail when a test case is expected to fail.
Use pytest.mark.skip when a test case is never expected to pass.
Use pytest.param when a test case needs a particular mark.
Use @pytest.fixture if multiple tests can share a setup object.
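A compact sketch combining several of these idioms (parametrize plus pytest.param carrying a mark; the test body is purely illustrative):
import pytest

@pytest.mark.parametrize(
    "value",
    [0, 1, pytest.param(-1, marks=pytest.mark.xfail(reason="negative case"))],
)
def test_value_is_non_negative(value):
    assert value >= 0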
Warning
Do not use pytest.xfail (which is different than pytest.mark.xfail) since it immediately stops the
test and does not check if the test will fail. If this is the behavior you desire, use pytest.skip instead.
If a test is known to fail but the manner in which it fails
is not meant to be captured, use pytest.mark.xfail. It is common to use this method for a test that
exhibits buggy behavior or a non-implemented feature. If
the failing test has flaky behavior, use the argument strict=False. This
will make it so pytest does not fail if the test happens to pass.
Prefer the decorator @pytest.mark.xfail and the argument pytest.param
over usage within a test so that the test is appropriately marked during the
collection phase of pytest. For xfailing a test that involves multiple
parameters, a fixture, or a combination of these, it is only possible to
xfail during the testing phase. To do so, use the request fixture:
def test_xfail(request):
    mark = pytest.mark.xfail(raises=TypeError, reason="Indicate why here")
    request.node.add_marker(mark)
xfail is not to be used for tests involving failure due to invalid user arguments.
For these tests, we need to verify the correct exception type and error message
is being raised, using pytest.raises instead.
Testing a warning#
Use tm.assert_produces_warning as a context manager to check that a block of code raises a warning.
with tm.assert_produces_warning(DeprecationWarning):
    pd.deprecated_function()
If a warning should specifically not happen in a block of code, pass False into the context manager.
with tm.assert_produces_warning(False):
    pd.no_warning_function()
If you have a test that would emit a warning, but you aren’t actually testing the
warning itself (say because it’s going to be removed in the future, or because we’re
matching a 3rd-party library’s behavior), then use pytest.mark.filterwarnings to
ignore the warning.
@pytest.mark.filterwarnings("ignore:msg:category")
def test_thing(self):
    pass
If you need finer-grained control, you can use Python’s
warnings module
to control whether a warning is ignored or raised at different places within
a single test.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
Testing an exception#
Use pytest.raises as a context manager
with the specific exception subclass (i.e. never use Exception) and the exception message in match.
with pytest.raises(ValueError, match="an error"):
    raise ValueError("an error")
Testing involving files#
The tm.ensure_clean context manager creates a temporary file for testing,
with a generated filename (or your filename if provided), that is automatically
deleted when the context block is exited.
with tm.ensure_clean('my_file_path') as path:
    # do something with the path
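For instance, a hypothetical round-trip test could use it like this:
import pandas as pd
import pandas._testing as tm

def test_csv_roundtrip():
    df = pd.DataFrame({"a": [1, 2]})
    with tm.ensure_clean("roundtrip.csv") as path:
        df.to_csv(path, index=False)
        result = pd.read_csv(path)
    tm.assert_frame_equal(result, df)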
Testing involving network connectivity#
It is highly discouraged to add a test that connects to the internet due to flakiness of network connections and
lack of ownership of the server that is being connected to. If network connectivity is absolutely required, use the
tm.network decorator.
@tm.network  # noqa
def test_network():
    result = package.call_to_internet()
If the test requires data from a specific website, specify check_before_test=True and the site in the decorator.
@tm.network("https://www.somespecificsite.com", check_before_test=True)
def test_network():
    result = pd.read_html("https://www.somespecificsite.com")
Example#
Here is an example of a self-contained set of tests in a file pandas/tests/test_cool_feature.py
that illustrate multiple features that we like to use. Please remember to add the GitHub issue number
as a comment to a new test.
import pytest
import numpy as np
import pandas as pd
import pandas._testing as tm


@pytest.mark.parametrize('dtype', ['int8', 'int16', 'int32', 'int64'])
def test_dtypes(dtype):
    assert str(np.dtype(dtype)) == dtype


@pytest.mark.parametrize(
    'dtype', ['float32', pytest.param('int16', marks=pytest.mark.skip),
              pytest.param('int32', marks=pytest.mark.xfail(
                  reason='to show how it works'))])
def test_mark(dtype):
    assert str(np.dtype(dtype)) == 'float32'


@pytest.fixture
def series():
    return pd.Series([1, 2, 3])


@pytest.fixture(params=['int8', 'int16', 'int32', 'int64'])
def dtype(request):
    return request.param


def test_series(series, dtype):
    # GH <issue_number>
    result = series.astype(dtype)
    assert result.dtype == dtype

    expected = pd.Series([1, 2, 3], dtype=dtype)
    tm.assert_series_equal(result, expected)
A test run of this yields
(pandas) bash-3.2$ pytest test_cool_feature.py -v
=========================== test session starts ===========================
platform darwin -- Python 3.6.2, pytest-3.6.0, py-1.4.31, pluggy-0.4.0
collected 11 items
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_dtypes[int16] PASSED
test_cool_feature.py::test_dtypes[int32] PASSED
test_cool_feature.py::test_dtypes[int64] PASSED
test_cool_feature.py::test_mark[float32] PASSED
test_cool_feature.py::test_mark[int16] SKIPPED
test_cool_feature.py::test_mark[int32] xfail
test_cool_feature.py::test_series[int8] PASSED
test_cool_feature.py::test_series[int16] PASSED
test_cool_feature.py::test_series[int32] PASSED
test_cool_feature.py::test_series[int64] PASSED
Tests that we have parametrized are now accessible via the test name, for example we could run these with -k int8 to sub-select only those tests which match int8.
(pandas) bash-3.2$ pytest test_cool_feature.py -v -k int8
=========================== test session starts ===========================
platform darwin -- Python 3.6.2, pytest-3.6.0, py-1.4.31, pluggy-0.4.0
collected 11 items
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED
Using hypothesis#
Hypothesis is a library for property-based testing. Instead of explicitly
parametrizing a test, you can describe all valid inputs and let Hypothesis
try to find a failing input. Even better, no matter how many random examples
it tries, Hypothesis always reports a single minimal counterexample to your
assertions - often an example that you would never have thought to test.
See Getting Started with Hypothesis
for more of an introduction, then refer to the Hypothesis documentation
for details.
import json
from hypothesis import given, strategies as st

any_json_value = st.deferred(lambda: st.one_of(
    st.none(), st.booleans(), st.floats(allow_nan=False), st.text(),
    st.lists(any_json_value), st.dictionaries(st.text(), any_json_value)
))


@given(value=any_json_value)
def test_json_roundtrip(value):
    result = json.loads(json.dumps(value))
    assert value == result
This test shows off several useful features of Hypothesis, as well as
demonstrating a good use-case: checking properties that should hold over
a large or complicated domain of inputs.
To keep the pandas test suite running quickly, parametrized tests are
preferred if the inputs or logic are simple, with Hypothesis tests reserved
for cases with complex logic or where there are too many combinations of
options or subtle interactions to test (or think of!) all of them.
Running the test suite#
The tests can then be run directly inside your Git clone (without having to
install pandas) by typing:
pytest pandas
Often it is worth running only a subset of tests first around your changes before running the
entire suite.
The easiest way to do this is with:
pytest pandas/path/to/test.py -k regex_matching_test_name
Or with one of the following constructs:
pytest pandas/tests/[test-module].py
pytest pandas/tests/[test-module].py::[TestClass]
pytest pandas/tests/[test-module].py::[TestClass]::[test_method]
Using pytest-xdist, one can
speed up local testing on multicore machines. To use this feature, you will
need to install pytest-xdist via:
pip install pytest-xdist
Two scripts are provided to assist with this. These scripts distribute
testing across 4 threads.
On Unix variants, one can type:
test_fast.sh
On Windows, one can type:
test_fast.bat
This can significantly reduce the time it takes to locally run tests before
submitting a pull request.
For more, see the pytest documentation.
Furthermore, one can run
pd.test()
with an imported pandas to run tests similarly.
Running the performance test suite#
Performance matters and it is worth considering whether your code has introduced
performance regressions. pandas is in the process of migrating to
asv benchmarks
to enable easy monitoring of the performance of critical pandas operations.
These benchmarks are all found in the pandas/asv_bench directory, and the
test results can be found here.
To use all features of asv, you will need either conda or
virtualenv. For more details please check the asv installation
webpage.
To install asv:
pip install git+https://github.com/airspeed-velocity/asv
If you need to run a benchmark, change your directory to asv_bench/ and run:
asv continuous -f 1.1 upstream/main HEAD
You can replace HEAD with the name of the branch you are working on,
and report benchmarks that changed by more than 10%.
The command uses conda by default for creating the benchmark
environments. If you want to use virtualenv instead, write:
asv continuous -f 1.1 -E virtualenv upstream/main HEAD
The -E virtualenv option should be added to all asv commands
that run benchmarks. The default value is defined in asv.conf.json.
Running the full benchmark suite can be an all-day process, depending on your
hardware and its resource utilization. However, usually it is sufficient to paste
only a subset of the results into the pull request to show that the committed changes
do not cause unexpected performance regressions. You can run specific benchmarks
using the -b flag, which takes a regular expression. For example, this will
only run benchmarks from a pandas/asv_bench/benchmarks/groupby.py file:
asv continuous -f 1.1 upstream/main HEAD -b ^groupby
If you want to only run a specific group of benchmarks from a file, you can do it
using . as a separator. For example:
asv continuous -f 1.1 upstream/main HEAD -b groupby.GroupByMethods
will only run the GroupByMethods benchmark defined in groupby.py.
You can also run the benchmark suite using the version of pandas
already installed in your current Python environment. This can be
useful if you do not have virtualenv or conda, or are using the
setup.py develop approach discussed above; for the in-place build
you need to set PYTHONPATH, e.g.
PYTHONPATH="$PWD/.." asv [remaining arguments].
You can run benchmarks using an existing Python
environment by:
asv run -e -E existing
or, to use a specific Python interpreter,:
asv run -e -E existing:python3.6
This will display stderr from the benchmarks, and use your local
python that comes from your $PATH.
Information on how to write a benchmark and how to use asv can be found in the
asv documentation.
Documenting your code#
Changes should be reflected in the release notes located in doc/source/whatsnew/vx.y.z.rst.
This file contains an ongoing change log for each release. Add an entry to this file to
document your fix, enhancement or (unavoidable) breaking change. Make sure to include the
GitHub issue number when adding your entry (using :issue:`1234` where 1234 is the
issue/pull request number). Your entry should be written using full sentences and proper
grammar.
When mentioning parts of the API, use a Sphinx :func:, :meth:, or :class:
directive as appropriate. Not all public API functions and methods have a
documentation page; ideally links would only be added if they resolve. You can
usually find similar examples by checking the release notes for one of the previous
versions.
If your code is a bugfix, add your entry to the relevant bugfix section. Avoid
adding to the Other section; only in rare cases should entries go there.
Being as concise as possible, the description of the bug should include how the
user may encounter it and an indication of the bug itself, e.g.
“produces incorrect results” or “incorrectly raises”. It may be necessary to also
indicate the new behavior.
If your code is an enhancement, it is most likely necessary to add usage
examples to the existing documentation. This can be done following the section
regarding documentation.
Further, to let users know when this feature was added, the versionadded
directive is used. The sphinx syntax for that is:
.. versionadded:: 1.1.0
This will put the text New in version 1.1.0 wherever you put the sphinx
directive. This should also be put in the docstring when adding a new function
or method (example)
or a new keyword argument (example).
| development/contributing_codebase.html |
pandas.tseries.offsets.YearEnd.normalize | pandas.tseries.offsets.YearEnd.normalize | YearEnd.normalize#
| reference/api/pandas.tseries.offsets.YearEnd.normalize.html |
pandas.tseries.offsets.CustomBusinessMonthEnd | `pandas.tseries.offsets.CustomBusinessMonthEnd`
Attributes
base | class pandas.tseries.offsets.CustomBusinessMonthEnd#
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
cbday_roll
Define default roll function to be called in apply method.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
month_roll
Define default roll function to be called in apply method.
name
Return a string representing the base frequency.
offset
Alias for self._offset.
calendar
holidays
m_offset
n
nanos
normalize
rule_code
weekmask
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.html |
pandas.core.groupby.GroupBy.head | `pandas.core.groupby.GroupBy.head`
Return first n rows of each group.
```
>>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]],
... columns=['A', 'B'])
>>> df.groupby('A').head(1)
A B
0 1 2
2 5 6
>>> df.groupby('A').head(-1)
A B
0 1 2
``` | final GroupBy.head(n=5)[source]#
Return first n rows of each group.
Similar to .apply(lambda x: x.head(n)), but it returns a subset of rows
from the original DataFrame with original index and order preserved
(as_index flag is ignored).
Parameters
nintIf positive: number of entries to include from start of each group.
If negative: number of entries to exclude from end of each group.
Returns
Series or DataFrameSubset of original Series or DataFrame as determined by n.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
Examples
>>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]],
... columns=['A', 'B'])
>>> df.groupby('A').head(1)
A B
0 1 2
2 5 6
>>> df.groupby('A').head(-1)
A B
0 1 2
| reference/api/pandas.core.groupby.GroupBy.head.html |
pandas.Series.str.islower | `pandas.Series.str.islower`
Check whether all characters in each string are lowercase.
This is equivalent to running the Python string method
str.islower() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
``` | Series.str.islower()[source]#
Check whether all characters in each string are lowercase.
This is equivalent to running the Python string method
str.islower() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
| reference/api/pandas.Series.str.islower.html |
pandas.DataFrame.value_counts | `pandas.DataFrame.value_counts`
Return a Series containing counts of unique rows in the DataFrame.
```
>>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
... 'num_wings': [2, 0, 0, 0]},
... index=['falcon', 'dog', 'cat', 'ant'])
>>> df
num_legs num_wings
falcon 2 2
dog 4 0
cat 4 0
ant 6 0
``` | DataFrame.value_counts(subset=None, normalize=False, sort=True, ascending=False, dropna=True)[source]#
Return a Series containing counts of unique rows in the DataFrame.
New in version 1.1.0.
Parameters
subsetlist-like, optionalColumns to use when counting unique combinations.
normalizebool, default FalseReturn proportions rather than frequencies.
sortbool, default TrueSort by frequencies.
ascendingbool, default FalseSort in ascending order.
dropnabool, default TrueDon’t include counts of rows that contain NA values.
New in version 1.3.0.
Returns
Series
See also
Series.value_countsEquivalent method on Series.
Notes
The returned Series will have a MultiIndex with one level per input
column. By default, rows that contain any NA values are omitted from
the result. By default, the resulting Series will be in descending
order so that the first element is the most frequently-occurring row.
Examples
>>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
... 'num_wings': [2, 0, 0, 0]},
... index=['falcon', 'dog', 'cat', 'ant'])
>>> df
num_legs num_wings
falcon 2 2
dog 4 0
cat 4 0
ant 6 0
>>> df.value_counts()
num_legs num_wings
4 0 2
2 2 1
6 0 1
dtype: int64
>>> df.value_counts(sort=False)
num_legs num_wings
2 2 1
4 0 2
6 0 1
dtype: int64
>>> df.value_counts(ascending=True)
num_legs num_wings
2 2 1
6 0 1
4 0 2
dtype: int64
>>> df.value_counts(normalize=True)
num_legs num_wings
4 0 0.50
2 2 0.25
6 0 0.25
dtype: float64
With dropna set to False we can also count rows with NA values.
>>> df = pd.DataFrame({'first_name': ['John', 'Anne', 'John', 'Beth'],
... 'middle_name': ['Smith', pd.NA, pd.NA, 'Louise']})
>>> df
first_name middle_name
0 John Smith
1 Anne <NA>
2 John <NA>
3 Beth Louise
>>> df.value_counts()
first_name middle_name
Beth Louise 1
John Smith 1
dtype: int64
>>> df.value_counts(dropna=False)
first_name middle_name
Anne NaN 1
Beth Louise 1
John Smith 1
NaN 1
dtype: int64
| reference/api/pandas.DataFrame.value_counts.html |
pandas.tseries.offsets.SemiMonthBegin.is_anchored | `pandas.tseries.offsets.SemiMonthBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | SemiMonthBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.SemiMonthBegin.is_anchored.html |
pandas.Series.iteritems | `pandas.Series.iteritems`
Lazily iterate over (index, value) tuples. | Series.iteritems()[source]#
Lazily iterate over (index, value) tuples.
Deprecated since version 1.5.0: iteritems is deprecated and will be removed in a future version.
Use .items instead.
This method returns an iterable of tuples (index, value). This is
convenient if you want to create a lazy iterator.
Returns
iterableIterable of tuples containing the (index, value) pairs from a
Series.
See also
Series.itemsRecommended alternative.
DataFrame.itemsIterate over (column name, Series) pairs.
DataFrame.iterrowsIterate over DataFrame rows as (index, Series) pairs.
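A short sketch using the recommended .items replacement (the behaviour is identical):
>>> s = pd.Series(['A', 'B', 'C'])
>>> for index, value in s.items():
...     print(f"Index : {index}, Value : {value}")
Index : 0, Value : A
Index : 1, Value : B
Index : 2, Value : C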
| reference/api/pandas.Series.iteritems.html |
pandas.DataFrame.to_sql | `pandas.DataFrame.to_sql`
Write records stored in a DataFrame to a SQL database.
```
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite://', echo=False)
``` | DataFrame.to_sql(name, con, schema=None, if_exists='fail', index=True, index_label=None, chunksize=None, dtype=None, method=None)[source]#
Write records stored in a DataFrame to a SQL database.
Databases supported by SQLAlchemy [1] are supported. Tables can be
newly created, appended to, or overwritten.
Parameters
namestrName of SQL table.
consqlalchemy.engine.(Engine or Connection) or sqlite3.ConnectionUsing SQLAlchemy makes it possible to use any DB supported by that
library. Legacy support is provided for sqlite3.Connection objects. The user
is responsible for engine disposal and connection closure for the SQLAlchemy
connectable. See here.
schemastr, optionalSpecify the schema (if database flavor supports this). If None, use
default schema.
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’How to behave if the table already exists.
fail: Raise a ValueError.
replace: Drop the table before inserting new values.
append: Insert new values to the existing table.
indexbool, default TrueWrite DataFrame index as a column. Uses index_label as the column
name in the table.
index_labelstr or sequence, default NoneColumn label for index column(s). If None is given (default) and
index is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
chunksizeint, optionalSpecify the number of rows in each batch to be written at a time.
By default, all rows will be written at once.
dtypedict or scalar, optionalSpecifying the datatype for columns. If a dictionary is used, the
keys should be the column names and the values should be the
SQLAlchemy types or strings for the sqlite3 legacy mode. If a
scalar is provided, it will be applied to all columns.
method{None, ‘multi’, callable}, optionalControls the SQL insertion clause used:
None : Uses standard SQL INSERT clause (one per row).
‘multi’: Pass multiple values in a single INSERT clause.
callable with signature (pd_table, conn, keys, data_iter).
Details and a sample callable implementation can be found in the
section insert method.
Returns
None or int
Number of rows affected by to_sql. None is returned if the callable passed into method does not return an integer number of rows. The number of returned rows affected is the sum of the rowcount attribute of sqlite3.Cursor or the SQLAlchemy connectable, which may not reflect the exact number of written rows as stipulated by the sqlite3 or SQLAlchemy documentation.
New in version 1.4.0.
Raises
ValueError
When the table already exists and if_exists is ‘fail’ (the default).
See also
read_sqlRead a DataFrame from a table.
Notes
Timezone aware datetime columns will be written as
Timestamp with timezone type with SQLAlchemy if supported by the
database. Otherwise, the datetimes will be stored as timezone unaware
timestamps local to the original timezone.
References
[1] https://docs.sqlalchemy.org
[2] https://www.python.org/dev/peps/pep-0249/
Examples
Create an in-memory SQLite database.
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite://', echo=False)
Create a table from scratch with 3 rows.
>>> df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']})
>>> df
name
0 User 1
1 User 2
2 User 3
>>> df.to_sql('users', con=engine)
3
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 1'), (1, 'User 2'), (2, 'User 3')]
An sqlalchemy.engine.Connection can also be passed to con:
>>> with engine.begin() as connection:
... df1 = pd.DataFrame({'name' : ['User 4', 'User 5']})
... df1.to_sql('users', con=connection, if_exists='append')
2
This is allowed to support operations that require that the same
DBAPI connection is used for the entire operation.
>>> df2 = pd.DataFrame({'name' : ['User 6', 'User 7']})
>>> df2.to_sql('users', con=engine, if_exists='append')
2
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 1'), (1, 'User 2'), (2, 'User 3'),
(0, 'User 4'), (1, 'User 5'), (0, 'User 6'),
(1, 'User 7')]
Overwrite the table with just df2.
>>> df2.to_sql('users', con=engine, if_exists='replace',
... index_label='id')
2
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 6'), (1, 'User 7')]
Specify the dtype (especially useful for integers with missing values).
Notice that while pandas is forced to store the data as floating point,
the database supports nullable integers. When fetching the data with
Python, we get back integer scalars.
>>> df = pd.DataFrame({"A": [1, None, 2]})
>>> df
A
0 1.0
1 NaN
2 2.0
>>> from sqlalchemy.types import Integer
>>> df.to_sql('integers', con=engine, index=False,
... dtype={"A": Integer()})
3
>>> engine.execute("SELECT * FROM integers").fetchall()
[(1,), (None,), (2,)]
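The method parameter also accepts a callable with signature (pd_table, conn, keys, data_iter). A minimal sketch of such a callable, assuming a SQLAlchemy 1.4+ connection and a DB-API qmark paramstyle (as with the sqlite engine above); this is illustrative, not the sample implementation referenced in the insert method section:
```
>>> def insert_with_executemany(pd_table, conn, keys, data_iter):
...     # keys lists the column names (including the index column, if written);
...     # data_iter yields one tuple of values per row.
...     columns = ", ".join(f'"{k}"' for k in keys)
...     placeholders = ", ".join(["?"] * len(keys))
...     sql = f'INSERT INTO "{pd_table.name}" ({columns}) VALUES ({placeholders})'
...     conn.exec_driver_sql(sql, list(data_iter))
>>> # to_sql returns None here because the callable does not return a row count.
>>> df2.to_sql('users', con=engine, if_exists='append',
...            method=insert_with_executemany)
```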
| reference/api/pandas.DataFrame.to_sql.html |
pandas.api.extensions.ExtensionDtype.is_dtype | `pandas.api.extensions.ExtensionDtype.is_dtype`
Check if we match ‘dtype’.
The object to check. | classmethod ExtensionDtype.is_dtype(dtype)[source]#
Check if we match ‘dtype’.
Parameters
dtype : object
The object to check.
Returns
bool
Notes
The default implementation is True if any of the following hold:
1. cls.construct_from_string(dtype) is an instance of cls.
2. dtype is an object and is an instance of cls.
3. dtype has a dtype attribute, and any of the above conditions is true for dtype.dtype.
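A small illustrative check, using the built-in CategoricalDtype (whose construct_from_string accepts the alias 'category'):
```
>>> pd.CategoricalDtype.is_dtype('category')
True
>>> pd.CategoricalDtype.is_dtype(object)
False
```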
| reference/api/pandas.api.extensions.ExtensionDtype.is_dtype.html |
pandas.tseries.offsets.Second.rule_code | pandas.tseries.offsets.Second.rule_code | Second.rule_code#
| reference/api/pandas.tseries.offsets.Second.rule_code.html |
pandas.tseries.offsets.BQuarterBegin.apply_index | `pandas.tseries.offsets.BQuarterBegin.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead. | BQuarterBegin.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
When the specific offset subclass does not have a vectorized implementation.
| reference/api/pandas.tseries.offsets.BQuarterBegin.apply_index.html |
pandas.core.window.rolling.Window.std | `pandas.core.window.rolling.Window.std`
Calculate the rolling weighted window standard deviation. | Window.std(ddof=1, numeric_only=False, *args, **kwargs)[source]#
Calculate the rolling weighted window standard deviation.
New in version 1.0.0.
Parameters
numeric_only : bool, default False
Include only float, int, boolean columns.
New in version 1.5.0.
**kwargs
Keyword arguments to configure the SciPy weighted window type.
Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rolling : Calling rolling with Series data.
pandas.DataFrame.rolling : Calling rolling with DataFrames.
pandas.Series.std : Aggregating std for Series.
pandas.DataFrame.std : Aggregating std for DataFrame.
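A minimal sketch of a weighted rolling standard deviation; ‘triang’ is chosen because that window type needs no extra SciPy parameters (SciPy must be installed for win_type windows):
```
>>> s = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
>>> s.rolling(window=3, win_type="triang").std()  # first two entries are NaN
```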
| reference/api/pandas.core.window.rolling.Window.std.html |
pandas.DatetimeIndex.is_quarter_end | `pandas.DatetimeIndex.is_quarter_end`
Indicator for whether the date is the last day of a quarter.
```
>>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
... periods=4)})
>>> df.assign(quarter=df.dates.dt.quarter,
... is_quarter_end=df.dates.dt.is_quarter_end)
dates quarter is_quarter_end
0 2017-03-30 1 False
1 2017-03-31 1 True
2 2017-04-01 2 False
3 2017-04-02 2 False
``` | property DatetimeIndex.is_quarter_end[source]#
Indicator for whether the date is the last day of a quarter.
Returns
is_quarter_end : Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same name and index. DatetimeIndex will have the same name.
See also
quarter : Return the quarter of the date.
is_quarter_start : Similar property indicating the quarter start.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
... periods=4)})
>>> df.assign(quarter=df.dates.dt.quarter,
... is_quarter_end=df.dates.dt.is_quarter_end)
dates quarter is_quarter_end
0 2017-03-30 1 False
1 2017-03-31 1 True
2 2017-04-01 2 False
3 2017-04-02 2 False
>>> idx = pd.date_range('2017-03-30', periods=4)
>>> idx
DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_quarter_end
array([False, True, False, False])
| reference/api/pandas.DatetimeIndex.is_quarter_end.html |
pandas.Series.quantile | `pandas.Series.quantile`
Return value at the given quantile.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
``` | Series.quantile(q=0.5, interpolation='linear')[source]#
Return value at the given quantile.
Parameters
q : float or array-like, default 0.5 (50% quantile)
The quantile(s) to compute, which can lie in range: 0 <= q <= 1.
interpolation : {‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}
This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j.
lower: i.
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
Returns
float or Series
If q is an array, a Series will be returned where the index is q and the values are the quantiles, otherwise a float will be returned.
See also
core.window.Rolling.quantile : Calculate the rolling quantile.
numpy.percentile : Returns the q-th percentile(s) of the array elements.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
| reference/api/pandas.Series.quantile.html |
pandas.DataFrame.reindex_like | `pandas.DataFrame.reindex_like`
Return an object with matching indices as other object.
```
>>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
... [31, 87.8, 'high'],
... [22, 71.6, 'medium'],
... [35, 95, 'medium']],
... columns=['temp_celsius', 'temp_fahrenheit',
... 'windspeed'],
... index=pd.date_range(start='2014-02-12',
... end='2014-02-15', freq='D'))
``` | DataFrame.reindex_like(other, method=None, copy=True, limit=None, tolerance=None)[source]#
Return an object with matching indices as other object.
Conform the object to the same index on all axes. Optional
filling logic, placing NaN in locations having no value
in the previous index. A new object is produced unless the
new index is equivalent to the current one and copy=False.
Parameters
other : Object of the same data type
Its row and column indices are used to define the new indices of this object.
method : {None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}
Method to use for filling holes in reindexed DataFrame. Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index.
None (default): don’t fill gaps
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use next valid observation to fill gap
nearest: use nearest valid observations to fill gap.
copy : bool, default True
Return a new object, even if the passed indexes are the same.
limit : int, default None
Maximum number of consecutive labels to fill for inexact matches.
tolerance : optional
Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type.
Returns
Series or DataFrame
Same type as caller, but with changed indices on each axis.
See also
DataFrame.set_index : Set row labels.
DataFrame.reset_index : Remove row labels or move them to new columns.
DataFrame.reindex : Change to new indices or expand indices.
Notes
Same as calling
.reindex(index=other.index, columns=other.columns,...).
Examples
>>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
... [31, 87.8, 'high'],
... [22, 71.6, 'medium'],
... [35, 95, 'medium']],
... columns=['temp_celsius', 'temp_fahrenheit',
... 'windspeed'],
... index=pd.date_range(start='2014-02-12',
... end='2014-02-15', freq='D'))
>>> df1
temp_celsius temp_fahrenheit windspeed
2014-02-12 24.3 75.7 high
2014-02-13 31.0 87.8 high
2014-02-14 22.0 71.6 medium
2014-02-15 35.0 95.0 medium
>>> df2 = pd.DataFrame([[28, 'low'],
... [30, 'low'],
... [35.1, 'medium']],
... columns=['temp_celsius', 'windspeed'],
... index=pd.DatetimeIndex(['2014-02-12', '2014-02-13',
... '2014-02-15']))
>>> df2
temp_celsius windspeed
2014-02-12 28.0 low
2014-02-13 30.0 low
2014-02-15 35.1 medium
>>> df2.reindex_like(df1)
temp_celsius temp_fahrenheit windspeed
2014-02-12 28.0 NaN low
2014-02-13 30.0 NaN low
2014-02-14 NaN NaN NaN
2014-02-15 35.1 NaN medium
| reference/api/pandas.DataFrame.reindex_like.html |
pandas.tseries.offsets.WeekOfMonth.rule_code | pandas.tseries.offsets.WeekOfMonth.rule_code | WeekOfMonth.rule_code#
| reference/api/pandas.tseries.offsets.WeekOfMonth.rule_code.html |
pandas.Series.to_period | `pandas.Series.to_period`
Convert Series from DatetimeIndex to PeriodIndex. | Series.to_period(freq=None, copy=True)[source]#
Convert Series from DatetimeIndex to PeriodIndex.
Parameters
freq : str, default None
Frequency associated with the PeriodIndex.
copy : bool, default True
Whether or not to return a copy.
Returns
Series
Series with index converted to PeriodIndex.
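The page ships without an example; a minimal sketch, converting a daily DatetimeIndex to monthly periods:
```
>>> s = pd.Series([1, 2], index=pd.date_range("2023-01-01", periods=2, freq="D"))
>>> s.to_period("M").index
PeriodIndex(['2023-01', '2023-01'], dtype='period[M]')
```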
| reference/api/pandas.Series.to_period.html |
pandas.tseries.offsets.CustomBusinessHour.is_on_offset | `pandas.tseries.offsets.CustomBusinessHour.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | CustomBusinessHour.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessHour.is_on_offset.html |
pandas.api.types.is_int64_dtype | `pandas.api.types.is_int64_dtype`
Check whether the provided array or dtype is of the int64 dtype.
```
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype('int8')
False
>>> is_int64_dtype('Int8')
False
>>> is_int64_dtype(pd.Int64Dtype)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
``` | pandas.api.types.is_int64_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of the int64 dtype.
Parameters
arr_or_dtype : array-like or dtype
The array or dtype to check.
Returns
boolean
Whether or not the array or dtype is of the int64 dtype.
Notes
Depending on system architecture, the return value of is_int64_dtype(
int) will be True if the OS uses 64-bit integers and False if the OS
uses 32-bit integers.
Examples
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype('int8')
False
>>> is_int64_dtype('Int8')
False
>>> is_int64_dtype(pd.Int64Dtype)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
| reference/api/pandas.api.types.is_int64_dtype.html |
pandas.IntervalIndex.set_closed | `pandas.IntervalIndex.set_closed`
Return an identical IntervalArray closed on the specified side.
```
>>> index = pd.arrays.IntervalArray.from_breaks(range(4))
>>> index
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> index.set_closed('both')
<IntervalArray>
[[0, 1], [1, 2], [2, 3]]
Length: 3, dtype: interval[int64, both]
``` | IntervalIndex.set_closed(*args, **kwargs)[source]#
Return an identical IntervalArray closed on the specified side.
Parameters
closed : {‘left’, ‘right’, ‘both’, ‘neither’}
Whether the intervals are closed on the left-side, right-side, both or neither.
Returns
new_index : IntervalArray
Examples
>>> index = pd.arrays.IntervalArray.from_breaks(range(4))
>>> index
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> index.set_closed('both')
<IntervalArray>
[[0, 1], [1, 2], [2, 3]]
Length: 3, dtype: interval[int64, both]
| reference/api/pandas.IntervalIndex.set_closed.html |
General functions | Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
| reference/general_functions.html |
pandas.errors.NullFrequencyError | `pandas.errors.NullFrequencyError`
Exception raised when a freq cannot be null. | exception pandas.errors.NullFrequencyError[source]#
Exception raised when a freq cannot be null.
Particularly DatetimeIndex.shift, TimedeltaIndex.shift,
PeriodIndex.shift.
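A minimal illustration of when the exception surfaces, using a DatetimeIndex whose freq is neither set nor inferable:
```
>>> idx = pd.DatetimeIndex(["2020-01-01", "2020-01-07"])
>>> idx.freq is None
True
>>> idx.shift(1)
Traceback (most recent call last):
...
pandas.errors.NullFrequencyError: Cannot shift with no freq
```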
| reference/api/pandas.errors.NullFrequencyError.html |
pandas.Timestamp.tz | `pandas.Timestamp.tz`
Alias for tzinfo.
```
>>> ts = pd.Timestamp(1584226800, unit='s', tz='Europe/Stockholm')
>>> ts.tz
<DstTzInfo 'Europe/Stockholm' CET+1:00:00 STD>
``` | property Timestamp.tz#
Alias for tzinfo.
Examples
>>> ts = pd.Timestamp(1584226800, unit='s', tz='Europe/Stockholm')
>>> ts.tz
<DstTzInfo 'Europe/Stockholm' CET+1:00:00 STD>
| reference/api/pandas.Timestamp.tz.html |
pandas.test | `pandas.test`
Run the pandas test suite using pytest.
By default, runs with the marks –skip-slow, –skip-network, –skip-db | pandas.test(extra_args=None)[source]#
Run the pandas test suite using pytest.
By default, runs with the marks --skip-slow, --skip-network, --skip-db.
Parameters
extra_args : list[str], default None
Extra marks to run the tests.
| reference/api/pandas.test.html |
pandas.core.resample.Resampler.interpolate | `pandas.core.resample.Resampler.interpolate`
Interpolate values according to different methods.
```
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
``` | Resampler.interpolate(method='linear', *, axis=0, limit=None, inplace=False, limit_direction='forward', limit_area=None, downcast=None, **kwargs)[source]#
Interpolate values according to different methods.
Fill NaN values using an interpolation method.
Please note that only method='linear' is supported for
DataFrame/Series with a MultiIndex.
Parameters
method : str, default ‘linear’
Interpolation technique to use. One of:
‘linear’: Ignore the index and treat the values as equally
spaced. This is the only method supported on MultiIndexes.
‘time’: Works on daily and higher resolution data to interpolate
given length of interval.
‘index’, ‘values’: use the actual numerical values of the index.
‘pad’: Fill in NaNs using existing values.
‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘spline’,
‘barycentric’, ‘polynomial’: Passed to
scipy.interpolate.interp1d. These methods use the numerical
values of the index. Both ‘polynomial’ and ‘spline’ require that
you also specify an order (int), e.g.
df.interpolate(method='polynomial', order=5).
‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’, ‘akima’,
‘cubicspline’: Wrappers around the SciPy interpolation methods of
similar names. See Notes.
‘from_derivatives’: Refers to
scipy.interpolate.BPoly.from_derivatives which
replaces ‘piecewise_polynomial’ interpolation method in
scipy 0.18.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
Axis to interpolate along. For Series this parameter is unused and defaults to 0.
limit : int, optional
Maximum number of consecutive NaNs to fill. Must be greater than 0.
inplace : bool, default False
Update the data in place if possible.
limit_direction : {‘forward’, ‘backward’, ‘both’}, optional
Consecutive NaNs will be filled in this direction.
If limit is specified:
If ‘method’ is ‘pad’ or ‘ffill’, ‘limit_direction’ must be ‘forward’.
If ‘method’ is ‘backfill’ or ‘bfill’, ‘limit_direction’ must be ‘backward’.
If ‘limit’ is not specified:
If ‘method’ is ‘backfill’ or ‘bfill’, the default is ‘backward’
else the default is ‘forward’
Changed in version 1.1.0: raises ValueError if limit_direction is ‘forward’ or ‘both’ and
method is ‘backfill’ or ‘bfill’.
raises ValueError if limit_direction is ‘backward’ or ‘both’ and
method is ‘pad’ or ‘ffill’.
limit_area : {None, ‘inside’, ‘outside’}, default None
If limit is specified, consecutive NaNs will be filled with this restriction.
None: No fill restriction.
‘inside’: Only fill NaNs surrounded by valid values
(interpolate).
‘outside’: Only fill NaNs outside valid values (extrapolate).
downcast : optional, ‘infer’ or None, defaults to None
Downcast dtypes if possible.
**kwargs : optional
Keyword arguments to pass on to the interpolating function.
Returns
Series or DataFrame or None
Returns the same object type as the caller, interpolated at some or all NaN values or None if inplace=True.
See also
fillna : Fill missing values using different methods.
scipy.interpolate.Akima1DInterpolator : Piecewise cubic polynomials (Akima interpolator).
scipy.interpolate.BPoly.from_derivatives : Piecewise polynomial in the Bernstein basis.
scipy.interpolate.interp1d : Interpolate a 1-D function.
scipy.interpolate.KroghInterpolator : Interpolate polynomial (Krogh interpolator).
scipy.interpolate.PchipInterpolator : PCHIP 1-d monotonic cubic interpolation.
scipy.interpolate.CubicSpline : Cubic spline data interpolator.
Notes
The ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’
methods are wrappers around the respective SciPy implementations of
similar names. These use the actual numerical values of the index.
For more information on their behavior, see the
SciPy documentation.
Examples
Filling in NaN in a Series via linear
interpolation.
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
Filling in NaN in a Series by padding, but filling at most two
consecutive NaN at a time.
>>> s = pd.Series([np.nan, "single_one", np.nan,
... "fill_two_more", np.nan, np.nan, np.nan,
... 4.71, np.nan])
>>> s
0 NaN
1 single_one
2 NaN
3 fill_two_more
4 NaN
5 NaN
6 NaN
7 4.71
8 NaN
dtype: object
>>> s.interpolate(method='pad', limit=2)
0 NaN
1 single_one
2 single_one
3 fill_two_more
4 fill_two_more
5 fill_two_more
6 NaN
7 4.71
8 4.71
dtype: object
Filling in NaN in a Series via polynomial interpolation or splines:
Both ‘polynomial’ and ‘spline’ methods require that you also specify
an order (int).
>>> s = pd.Series([0, 2, np.nan, 8])
>>> s.interpolate(method='polynomial', order=2)
0 0.000000
1 2.000000
2 4.666667
3 8.000000
dtype: float64
Fill the DataFrame forward (that is, going down) along each column
using linear interpolation.
Note how the last entry in column ‘a’ is interpolated differently,
because there is no entry after it to use for interpolation.
Note how the first entry in column ‘b’ remains NaN, because there
is no entry before it to use for interpolation.
>>> df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0),
... (np.nan, 2.0, np.nan, np.nan),
... (2.0, 3.0, np.nan, 9.0),
... (np.nan, 4.0, -4.0, 16.0)],
... columns=list('abcd'))
>>> df
a b c d
0 0.0 NaN -1.0 1.0
1 NaN 2.0 NaN NaN
2 2.0 3.0 NaN 9.0
3 NaN 4.0 -4.0 16.0
>>> df.interpolate(method='linear', limit_direction='forward', axis=0)
a b c d
0 0.0 NaN -1.0 1.0
1 1.0 2.0 -2.0 5.0
2 2.0 3.0 -3.0 9.0
3 2.0 4.0 -4.0 16.0
Using polynomial interpolation.
>>> df['d'].interpolate(method='polynomial', order=2)
0 1.0
1 4.0
2 9.0
3 16.0
Name: d, dtype: float64
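The examples above call interpolate on a Series or DataFrame directly; a minimal sketch of the resampling use case, upsampling two daily observations onto a daily grid and filling the gap linearly:
```
>>> s = pd.Series([1.0, 3.0], index=pd.to_datetime(["2023-01-01", "2023-01-03"]))
>>> s.resample("D").interpolate()
2023-01-01    1.0
2023-01-02    2.0
2023-01-03    3.0
Freq: D, dtype: float64
```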
| reference/api/pandas.core.resample.Resampler.interpolate.html |
pandas.melt | `pandas.melt`
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
```
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
A B C
0 a 1 2
1 b 3 4
2 c 5 6
``` | pandas.melt(frame, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None, ignore_index=True)[source]#
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
This function is useful to massage a DataFrame into a format where one
or more columns are identifier variables (id_vars), while all other
columns, considered measured variables (value_vars), are “unpivoted” to
the row axis, leaving just two non-identifier columns, ‘variable’ and
‘value’.
Parameters
id_vars : tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name : scalar
Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.
value_name : scalar, default ‘value’
Name to use for the ‘value’ column.
col_level : int or str, optional
If columns are a MultiIndex then use this level to melt.
ignore_index : bool, default True
If True, original index is ignored. If False, the original index is retained. Index labels will be repeated as necessary.
New in version 1.1.0.
Returns
DataFrame
Unpivoted DataFrame.
See also
DataFrame.melt : Identical method.
pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.pivot : Return reshaped DataFrame organized by given index / column values.
DataFrame.explode : Explode a DataFrame from list-like columns to long format.
Notes
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
A B C
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, id_vars=['A'], value_vars=['B'])
A variable value
0 a B 1
1 b B 3
2 c B 5
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
A variable value
0 a B 1
1 b B 3
2 c B 5
3 a C 2
4 b C 4
5 c C 6
The names of ‘variable’ and ‘value’ columns can be customized:
>>> pd.melt(df, id_vars=['A'], value_vars=['B'],
... var_name='myVarname', value_name='myValname')
A myVarname myValname
0 a B 1
1 b B 3
2 c B 5
Original index values can be kept around:
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
A variable value
0 a B 1
1 b B 3
2 c B 5
0 a C 2
1 b C 4
2 c C 6
If you have multi-index columns:
>>> df.columns = [list('ABC'), list('DEF')]
>>> df
A B C
D E F
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, col_level=0, id_vars=['A'], value_vars=['B'])
A variable value
0 a B 1
1 b B 3
2 c B 5
>>> pd.melt(df, id_vars=[('A', 'D')], value_vars=[('B', 'E')])
(A, D) variable_0 variable_1 value
0 a B E 1
1 b B E 3
2 c B E 5
| reference/api/pandas.melt.html |
pandas.tseries.offsets.BusinessMonthEnd.copy | `pandas.tseries.offsets.BusinessMonthEnd.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | BusinessMonthEnd.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.copy.html |
pandas.TimedeltaIndex.seconds | `pandas.TimedeltaIndex.seconds`
Number of seconds (>= 0 and less than 1 day) for each element. | property TimedeltaIndex.seconds[source]#
Number of seconds (>= 0 and less than 1 day) for each element.
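The page ships without an example; a minimal sketch (output shown for pandas 1.x, where an Int64Index is returned). Only the sub-day part of the timedelta is reported: 1 hour, 2 minutes and 3 seconds is 3723 seconds.
```
>>> idx = pd.to_timedelta(["1 days 01:02:03"])
>>> idx.seconds
Int64Index([3723], dtype='int64')
```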
| reference/api/pandas.TimedeltaIndex.seconds.html |
pandas.TimedeltaIndex.to_series | `pandas.TimedeltaIndex.to_series`
Create a Series with both index and values equal to the index keys.
Useful with map for returning an indexer based on an index.
```
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
``` | TimedeltaIndex.to_series(index=None, name=None)[source]#
Create a Series with both index and values equal to the index keys.
Useful with map for returning an indexer based on an index.
Parameters
index : Index, optional
Index of resulting Series. If None, defaults to original index.
name : str, optional
Name of resulting Series. If None, defaults to name of original index.
Returns
Series
The dtype will be based on the type of the Index values.
See also
Index.to_frame : Convert an Index to a DataFrame.
Series.to_frame : Convert Series to DataFrame.
Examples
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
By default, the original Index and name are reused.
>>> idx.to_series()
animal
Ant Ant
Bear Bear
Cow Cow
Name: animal, dtype: object
To enforce a new Index, specify new labels to index:
>>> idx.to_series(index=[0, 1, 2])
0 Ant
1 Bear
2 Cow
Name: animal, dtype: object
To override the name of the resulting column, specify name:
>>> idx.to_series(name='zoo')
animal
Ant Ant
Bear Bear
Cow Cow
Name: zoo, dtype: object
| reference/api/pandas.TimedeltaIndex.to_series.html |
pandas.DataFrame.itertuples | `pandas.DataFrame.itertuples`
Iterate over DataFrame rows as namedtuples.
If True, return the index as the first element of the tuple.
```
>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
... index=['dog', 'hawk'])
>>> df
num_legs num_wings
dog 4 0
hawk 2 2
>>> for row in df.itertuples():
... print(row)
...
Pandas(Index='dog', num_legs=4, num_wings=0)
Pandas(Index='hawk', num_legs=2, num_wings=2)
``` | DataFrame.itertuples(index=True, name='Pandas')[source]#
Iterate over DataFrame rows as namedtuples.
Parameters
index : bool, default True
If True, return the index as the first element of the tuple.
name : str or None, default “Pandas”
The name of the returned namedtuples or None to return regular tuples.
Returns
iterator
An object to iterate over namedtuples for each row in the DataFrame with the first field possibly being the index and following fields being the column values.
See also
DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.items : Iterate over (column name, Series) pairs.
Notes
The column names will be renamed to positional names if they are
invalid Python identifiers, repeated, or start with an underscore.
Examples
>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
... index=['dog', 'hawk'])
>>> df
num_legs num_wings
dog 4 0
hawk 2 2
>>> for row in df.itertuples():
... print(row)
...
Pandas(Index='dog', num_legs=4, num_wings=0)
Pandas(Index='hawk', num_legs=2, num_wings=2)
By setting the index parameter to False we can remove the index
as the first element of the tuple:
>>> for row in df.itertuples(index=False):
... print(row)
...
Pandas(num_legs=4, num_wings=0)
Pandas(num_legs=2, num_wings=2)
With the name parameter set we set a custom name for the yielded
namedtuples:
>>> for row in df.itertuples(name='Animal'):
... print(row)
...
Animal(Index='dog', num_legs=4, num_wings=0)
Animal(Index='hawk', num_legs=2, num_wings=2)
| reference/api/pandas.DataFrame.itertuples.html |
pandas.DataFrame.lt | `pandas.DataFrame.lt`
Get Less than of dataframe and other, element-wise (binary operator lt).
```
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
``` | DataFrame.lt(other, axis='columns', level=None)[source]#
Get Less than of dataframe and other, element-wise (binary operator lt).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis
(rows or columns) and level for comparison.
Parameters
other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns
DataFrame of bool
Result of the comparison.
See also
DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together.
NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100
cost revenue
A False True
B False False
C True False
>>> df.eq(100)
cost revenue
A False True
B False False
C True False
When other is a Series, the columns of a DataFrame are aligned
with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])
cost revenue
A True True
B True False
C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
cost revenue
A True False
B True True
C True True
D True True
When comparing to an arbitrary sequence, the number of columns must
match the number of elements in other:
>>> df == [250, 100]
cost revenue
A True True
B False False
C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index')
cost revenue
A True False
B False True
C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
... index=['A', 'B', 'C', 'D'])
>>> other
revenue
A 300
B 250
C 100
D 150
>>> df.gt(other)
cost revenue
A False False
B False False
C False True
D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
... 'revenue': [100, 250, 300, 200, 175, 225]},
... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
... ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
cost revenue
Q1 A 250 100
B 150 250
C 100 300
Q2 A 150 200
B 300 175
C 220 225
>>> df.le(df_multindex, level=1)
cost revenue
Q1 A True True
B True True
C True True
Q2 A False True
B True False
C True False
| reference/api/pandas.DataFrame.lt.html |
pandas.tseries.offsets.FY5253.variation | pandas.tseries.offsets.FY5253.variation | FY5253.variation#
| reference/api/pandas.tseries.offsets.FY5253.variation.html |
pandas.Series.drop_duplicates | `pandas.Series.drop_duplicates`
Return Series with duplicate values removed.
```
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
``` | Series.drop_duplicates(*, keep='first', inplace=False)[source]#
Return Series with duplicate values removed.
Parameters
keep : {‘first’, ‘last’, False}, default ‘first’
Method to handle dropping duplicates:
‘first’ : Drop duplicates except for the first occurrence.
‘last’ : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
inplace : bool, default False
If True, performs operation inplace and returns None.
Returns
Series or None
Series with duplicates dropped or None if inplace=True.
See also
Index.drop_duplicates : Equivalent method on Index.
DataFrame.drop_duplicates : Equivalent method on DataFrame.
Series.duplicated : Related method on Series, indicating duplicate Series values.
Series.unique : Return unique values as an array.
Examples
Generate a Series with duplicated entries.
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
With the ‘keep’ parameter, the selection behaviour of duplicated values
can be changed. The value ‘first’ keeps the first occurrence for each
set of duplicated entries. The default value of keep is ‘first’.
>>> s.drop_duplicates()
0 lama
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
The value ‘last’ for parameter ‘keep’ keeps the last occurrence for
each set of duplicated entries.
>>> s.drop_duplicates(keep='last')
1 cow
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
The value False for parameter ‘keep’ discards all sets of
duplicated entries. Setting the value of ‘inplace’ to True performs
the operation inplace and returns None.
>>> s.drop_duplicates(keep=False, inplace=True)
>>> s
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
| reference/api/pandas.Series.drop_duplicates.html |
pandas.tseries.offsets.QuarterBegin.startingMonth | pandas.tseries.offsets.QuarterBegin.startingMonth | QuarterBegin.startingMonth#
| reference/api/pandas.tseries.offsets.QuarterBegin.startingMonth.html |
pandas.io.formats.style.Styler.format_index | `pandas.io.formats.style.Styler.format_index`
Format the text display value of index labels or column headers.
New in version 1.4.0.
```
>>> df = pd.DataFrame([[1, 2, 3]], columns=[2.0, np.nan, 4.0])
>>> df.style.format_index(axis=1, na_rep='MISS', precision=3)
2.000 MISS 4.000
0 1 2 3
``` | Styler.format_index(formatter=None, axis=0, level=None, na_rep=None, precision=None, decimal='.', thousands=None, escape=None, hyperlinks=None)[source]#
Format the text display value of index labels or column headers.
New in version 1.4.0.
Parameters
formatter : str, callable, dict or None
Object to define how values are displayed. See notes.
axis : {0, “index”, 1, “columns”}
Whether to apply the formatter to the index or column headers.
level : int, str, list
The level(s) over which to apply the generic formatter.
na_rep : str, optional
Representation for missing values. If na_rep is None, no special formatting is applied.
precision : int, optional
Floating point precision to use for display purposes, if not determined by the specified formatter.
decimal : str, default “.”
Character used as decimal separator for floats, complex and integers.
thousands : str, optional, default None
Character used as thousands separator for floats, complex and integers.
escape : str, optional
Use ‘html’ to replace the characters &, <, >, ', and " in cell display string with HTML-safe sequences. Use ‘latex’ to replace the characters &, %, $, #, _, {, }, ~, ^, and \ in the cell display string with LaTeX-safe sequences. Escaping is done before formatter.
hyperlinks : {“html”, “latex”}, optional
Convert string patterns containing https://, http://, ftp:// or www. to HTML <a> tags as clickable URL hyperlinks if “html”, or LaTeX href commands if “latex”.
Returns
self : Styler
See also
Styler.format : Format the text display value of data cells.
Notes
This method assigns a formatting function, formatter, to each level label
in the DataFrame’s index or column headers. If formatter is None,
then the default formatter is used.
If a callable then that function should take a label value as input and return
a displayable representation, such as a string. If formatter is
given as a string this is assumed to be a valid Python format specification
and is wrapped to a callable as string.format(x). If a dict is given,
keys should correspond to MultiIndex level numbers or names, and values should
be string or callable, as above.
The default formatter currently expresses floats and complex numbers with the
pandas display precision unless using the precision argument here. The
default formatter does not adjust the representation of missing values unless
the na_rep argument is used.
The level argument defines which levels of a MultiIndex to apply the
method to. If the formatter argument is given in dict form but does
not include all levels within the level argument then these unspecified levels
will have the default formatter applied. Any levels in the formatter dict
specifically excluded from the level argument will be ignored.
When using a formatter string the dtypes must be compatible, otherwise a
ValueError will be raised.
Warning
Styler.format_index is ignored when using the output format
Styler.to_excel, since Excel and Python have inherently different
formatting structures.
However, it is possible to use the number-format pseudo CSS attribute
to force Excel permissible formatting. See documentation for Styler.format.
Examples
Using na_rep and precision with the default formatter
>>> df = pd.DataFrame([[1, 2, 3]], columns=[2.0, np.nan, 4.0])
>>> df.style.format_index(axis=1, na_rep='MISS', precision=3)
2.000 MISS 4.000
0 1 2 3
Using a formatter specification on consistent dtypes in a level
>>> df.style.format_index('{:.2f}', axis=1, na_rep='MISS')
2.00 MISS 4.00
0 1 2 3
Using the default formatter for unspecified levels
>>> df = pd.DataFrame([[1, 2, 3]],
... columns=pd.MultiIndex.from_arrays([["a", "a", "b"],[2, np.nan, 4]]))
>>> df.style.format_index({0: lambda v: v.upper()}, axis=1, precision=1)
...
A B
2.0 nan 4.0
0 1 2 3
Using a callable formatter function.
>>> func = lambda s: 'STRING' if isinstance(s, str) else 'FLOAT'
>>> df.style.format_index(func, axis=1, na_rep='MISS')
...
STRING STRING
FLOAT MISS FLOAT
0 1 2 3
Using a formatter with HTML escape and na_rep.
>>> df = pd.DataFrame([[1, 2, 3]], columns=['"A"', 'A&B', None])
>>> s = df.style.format_index('$ {0}', axis=1, escape="html", na_rep="NA")
...
<th .. >$ "A"</th>
<th .. >$ A&B</th>
<th .. >NA</td>
...
Using a formatter with LaTeX escape.
>>> df = pd.DataFrame([[1, 2, 3]], columns=["123", "~", "$%#"])
>>> df.style.format_index("\\textbf{{{}}}", escape="latex", axis=1).to_latex()
...
\begin{tabular}{lrrr}
{} & {\textbf{123}} & {\textbf{\textasciitilde }} & {\textbf{\$\%\#}} \\
0 & 1 & 2 & 3 \\
\end{tabular}
| reference/api/pandas.io.formats.style.Styler.format_index.html |
pandas.tseries.offsets.DateOffset.is_anchored | `pandas.tseries.offsets.DateOffset.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | DateOffset.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.DateOffset.is_anchored.html |
pandas.Timestamp.dayofyear | `pandas.Timestamp.dayofyear`
Return the day of the year.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.day_of_year
74
``` | Timestamp.dayofyear#
Return the day of the year.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.day_of_year
74
| reference/api/pandas.Timestamp.dayofyear.html |
pandas.tseries.offsets.Minute.apply | pandas.tseries.offsets.Minute.apply | Minute.apply()#
| reference/api/pandas.tseries.offsets.Minute.apply.html |
pandas.Period.end_time | `pandas.Period.end_time`
Get the Timestamp for the end of the period. | Period.end_time#
Get the Timestamp for the end of the period.
Returns
Timestamp
See also
Period.start_time : Return the start Timestamp.
Period.dayofyear : Return the day of year.
Period.daysinmonth : Return the days in that month.
Period.dayofweek : Return the day of the week.
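The page ships without an example; a minimal sketch for a monthly period:
```
>>> p = pd.Period("2020-01", freq="M")
>>> p.end_time
Timestamp('2020-01-31 23:59:59.999999999')
```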
| reference/api/pandas.Period.end_time.html |
pandas.tseries.offsets.FY5253Quarter.base | `pandas.tseries.offsets.FY5253Quarter.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | FY5253Quarter.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
| reference/api/pandas.tseries.offsets.FY5253Quarter.base.html |
pandas.core.groupby.DataFrameGroupBy.size | `pandas.core.groupby.DataFrameGroupBy.size`
Compute group sizes. | DataFrameGroupBy.size()[source]#
Compute group sizes.
Returns
DataFrame or Series
Number of rows in each group as a Series if as_index is True or a DataFrame if as_index is False.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
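A minimal sketch; note that size counts every row in a group, including rows that hold NaN in other columns (unlike count):
```
>>> df = pd.DataFrame({"a": ["x", "x", "y"], "b": [1, None, 3]})
>>> df.groupby("a").size()
a
x    2
y    1
dtype: int64
```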
| reference/api/pandas.core.groupby.DataFrameGroupBy.size.html |
pandas.tseries.offsets.LastWeekOfMonth | `pandas.tseries.offsets.LastWeekOfMonth`
Describes monthly dates in last week of month.
For example “the last Tuesday of each month”.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.LastWeekOfMonth()
Timestamp('2022-01-31 00:00:00')
``` | class pandas.tseries.offsets.LastWeekOfMonth#
Describes monthly dates in last week of month.
For example “the last Tuesday of each month”.
Parameters
n : int, default 1
weekday : int {0, 1, …, 6}, default 0
A specific integer for the day of the week.
0 is Monday
1 is Tuesday
2 is Wednesday
3 is Thursday
4 is Friday
5 is Saturday
6 is Sunday.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.LastWeekOfMonth()
Timestamp('2022-01-31 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freq : str
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
week
weekday
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.LastWeekOfMonth.html |
pandas.tseries.offsets.Nano.name | `pandas.tseries.offsets.Nano.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
``` | Nano.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.Nano.name.html |
pandas.tseries.offsets.YearBegin.rollforward | `pandas.tseries.offsets.YearBegin.rollforward`
Roll provided date forward to next offset only if not on offset. | YearBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
Rolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.YearBegin.rollforward.html |
pandas.tseries.offsets.Day.kwds | `pandas.tseries.offsets.Day.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
``` | Day.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
| reference/api/pandas.tseries.offsets.Day.kwds.html |
pandas.Index.array | `pandas.Index.array`
The ExtensionArray of the data backing this Series or Index.
An ExtensionArray of the values stored within. For extension
types, this is the actual array. For NumPy native types, this
is a thin (no copy) wrapper around numpy.ndarray.
```
>>> pd.Series([1, 2, 3]).array
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
``` | Index.array[source]#
The ExtensionArray of the data backing this Series or Index.
Returns
ExtensionArray
An ExtensionArray of the values stored within. For extension types, this is the actual array. For NumPy native types, this is a thin (no copy) wrapper around numpy.ndarray.
.array differs from .values, which may require converting the data to a different form.
See also
Index.to_numpy : Similar method that always returns a NumPy array.
Series.to_numpy : Similar method that always returns a NumPy array.
Notes
This table lays out the different array types for each extension
dtype within pandas.
dtype                 array type
category              Categorical
period                PeriodArray
interval              IntervalArray
IntegerNA             IntegerArray
string                StringArray
boolean               BooleanArray
datetime64[ns, tz]    DatetimeArray
For any 3rd-party extension types, the array type will be an
ExtensionArray.
For all remaining dtypes .array will be an
arrays.NumpyExtensionArray wrapping the actual ndarray
stored within. If you absolutely need a NumPy array (possibly with
copying / coercing data), then use Series.to_numpy() instead.
Examples
For regular NumPy types like int and float, a PandasArray is returned.
>>> pd.Series([1, 2, 3]).array
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
For extension types, like Categorical, the actual ExtensionArray
is returned
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.array
['a', 'b', 'a']
Categories (2, object): ['a', 'b']
| reference/api/pandas.Index.array.html |
pandas.DataFrame.to_hdf | `pandas.DataFrame.to_hdf`
Write the contained data to an HDF5 file using HDFStore.
```
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
... index=['a', 'b', 'c'])
>>> df.to_hdf('data.h5', key='df', mode='w')
``` | DataFrame.to_hdf(path_or_buf, key, mode='a', complevel=None, complib=None, append=False, format=None, index=True, min_itemsize=None, nan_rep=None, dropna=None, data_columns=None, errors='strict', encoding='UTF-8')[source]#
Write the contained data to an HDF5 file using HDFStore.
Hierarchical Data Format (HDF) is self-describing, allowing an
application to interpret the structure and contents of a file with
no outside information. One HDF file can hold a mix of related objects
which can be accessed as a group or as individual objects.
In order to add another DataFrame or Series to an existing HDF file
please use append mode and a different key.
Warning
One can store a subclass of DataFrame or Series to HDF5,
but the type of the subclass is lost upon storing.
For more information see the user guide.
Parameters
path_or_buf : str or pandas.HDFStore
File path or HDFStore object.
key : str
Identifier for the group in the store.
mode : {‘a’, ‘w’, ‘r+’}, default ‘a’
Mode to open file:
‘w’: write, a new file is created (an existing file with
the same name would be deleted).
‘a’: append, an existing file is opened for reading and
writing, and if the file does not exist it is created.
‘r+’: similar to ‘a’, but the file must already exist.
complevel : {0-9}, default None
Specifies a compression level for data.
A value of 0 or None disables compression.
complib : {‘zlib’, ‘lzo’, ‘bzip2’, ‘blosc’}, default ‘zlib’
Specifies the compression library to be used.
As of v0.20.2 these additional compressors for Blosc are supported
(default if no compressor specified: ‘blosc:blosclz’):
{‘blosc:blosclz’, ‘blosc:lz4’, ‘blosc:lz4hc’, ‘blosc:snappy’,
‘blosc:zlib’, ‘blosc:zstd’}.
Specifying a compression library which is not available issues
a ValueError.
append : bool, default False
For Table formats, append the input data to the existing.
format : {‘fixed’, ‘table’, None}, default ‘fixed’
Possible values:
‘fixed’: Fixed format. Fast writing/reading. Not-appendable,
nor searchable.
‘table’: Table format. Write as a PyTables Table structure
which may perform worse but allow more flexible operations
like searching / selecting subsets of the data.
If None, pd.get_option(‘io.hdf.default_format’) is checked,
followed by fallback to “fixed”.
index : bool, default True
Write DataFrame index as a column.
min_itemsize : dict or int, optional
Map column names to minimum string sizes for columns.
nan_rep : Any, optional
How to represent null values as str. Not allowed with append=True.
dropna : bool, default False, optional
Remove missing values.
data_columns : list of columns or True, optional
List of columns to create as indexed data columns for on-disk
queries, or True to use all columns. By default only the axes
of the object are indexed. See
Query via data columns. for
more information.
Applicable only to format=’table’.
errors : str, default ‘strict’
Specifies how encoding and decoding errors are to be handled. See the errors argument for open() for a full list of options.
encoding : str, default “UTF-8”
See also
read_hdf : Read from HDF file.
DataFrame.to_orc : Write a DataFrame to the binary orc format.
DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
DataFrame.to_sql : Write to a SQL table.
DataFrame.to_feather : Write out feather-format for DataFrames.
DataFrame.to_csv : Write out to a csv file.
Examples
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
... index=['a', 'b', 'c'])
>>> df.to_hdf('data.h5', key='df', mode='w')
We can add another object to the same file:
>>> s = pd.Series([1, 2, 3, 4])
>>> s.to_hdf('data.h5', key='s')
Reading from HDF file:
>>> pd.read_hdf('data.h5', 'df')
A B
a 1 4
b 2 5
c 3 6
>>> pd.read_hdf('data.h5', 's')
0 1
1 2
2 3
3 4
dtype: int64
| reference/api/pandas.DataFrame.to_hdf.html |
pandas.DataFrame.isetitem | `pandas.DataFrame.isetitem`
Set the given value in the column with position ‘loc’. | DataFrame.isetitem(loc, value)[source]#
Set the given value in the column with position ‘loc’.
This is a positional analogue to __setitem__.
Parameters
loc : int or sequence of ints
value : scalar or arraylike
Notes
Unlike frame.iloc[:, i] = value, frame.isetitem(loc, value) will
never try to set the values in place, but will always insert a new
array.
In cases where frame.columns is unique, this is equivalent to
frame[frame.columns[i]] = value.
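A minimal sketch, replacing the column at position 1 wholesale:
```
>>> df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
>>> df.isetitem(1, [30, 40])
>>> df
   a   b
0  1  30
1  2  40
```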
| reference/api/pandas.DataFrame.isetitem.html |
pandas.DataFrame.iteritems | `pandas.DataFrame.iteritems`
Iterate over (column name, Series) pairs. | DataFrame.iteritems()[source]#
Iterate over (column name, Series) pairs.
Deprecated since version 1.5.0: iteritems is deprecated and will be removed in a future version.
Use .items instead.
Iterates over the DataFrame columns, returning a tuple with
the column name and the content as a Series.
Yields
label : object
The column names for the DataFrame being iterated over.
content : Series
The column entries belonging to each label, as a Series.
See also
DataFrame.items : Recommended alternative.
DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values.
| reference/api/pandas.DataFrame.iteritems.html |
pandas.tseries.offsets.QuarterEnd.apply_index | `pandas.tseries.offsets.QuarterEnd.apply_index`
Vectorized apply of DateOffset to DatetimeIndex. | QuarterEnd.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
When the specific offset subclass does not have a vectorized implementation.
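A minimal sketch of the recommended replacement (dates are illustrative):
>>> dtindex = pd.DatetimeIndex(['2022-01-10', '2022-05-15'])
>>> pd.offsets.QuarterEnd() + dtindex
DatetimeIndex(['2022-03-31', '2022-06-30'], dtype='datetime64[ns]', freq=None)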
| reference/api/pandas.tseries.offsets.QuarterEnd.apply_index.html |
pandas.api.types.is_interval_dtype | `pandas.api.types.is_interval_dtype`
Check whether an array-like or dtype is of the Interval dtype.
```
>>> is_interval_dtype(object)
False
>>> is_interval_dtype(IntervalDtype())
True
>>> is_interval_dtype([1, 2, 3])
False
>>>
>>> interval = pd.Interval(1, 2, closed="right")
>>> is_interval_dtype(interval)
False
>>> is_interval_dtype(pd.IntervalIndex([interval]))
True
``` | pandas.api.types.is_interval_dtype(arr_or_dtype)[source]#
Check whether an array-like or dtype is of the Interval dtype.
Parameters
arr_or_dtype : array-like or dtype
The array-like or dtype to check.
Returns
boolean
Whether or not the array-like or dtype is of the Interval dtype.
Examples
>>> is_interval_dtype(object)
False
>>> is_interval_dtype(IntervalDtype())
True
>>> is_interval_dtype([1, 2, 3])
False
>>>
>>> interval = pd.Interval(1, 2, closed="right")
>>> is_interval_dtype(interval)
False
>>> is_interval_dtype(pd.IntervalIndex([interval]))
True
| reference/api/pandas.api.types.is_interval_dtype.html |
pandas.isnull | `pandas.isnull`
Detect missing values for an array-like object.
```
>>> pd.isna('dog')
False
``` | pandas.isnull(obj)[source]#
Detect missing values for an array-like object.
This function takes a scalar or array-like object and indicates
whether values are missing (NaN in numeric arrays, None or NaN
in object arrays, NaT in datetimelike).
Parameters
obj : scalar or array-like
Object to check for null or missing values.
Returns
bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean indicating whether each corresponding element is missing.
See also
notna : Boolean inverse of pandas.isna.
Series.isna : Detect missing values in a Series.
DataFrame.isna : Detect missing values in a DataFrame.
Index.isna : Detect missing values in an Index.
Examples
Scalar arguments (including strings) result in a scalar boolean.
>>> pd.isna('dog')
False
>>> pd.isna(pd.NA)
True
>>> pd.isna(np.nan)
True
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.isna(array)
array([[False, True, False],
[False, False, True]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.isna(index)
array([False, False, True, False])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.isna(df)
0 1 2
0 False False False
1 False True False
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool
| reference/api/pandas.isnull.html |
pandas.DataFrame.add | `pandas.DataFrame.add`
Get Addition of dataframe and other, element-wise (binary operator add).
Equivalent to dataframe + other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, radd.
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
``` | DataFrame.add(other, axis='columns', level=None, fill_value=None)[source]#
Get Addition of dataframe and other, element-wise (binary operator add).
Equivalent to dataframe + other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, radd.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
other : scalar, sequence, Series, dict or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or 'index', 1 or 'columns'}
Whether to compare by the index (0 or 'index') or columns (1 or 'columns'). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing, the result will be missing.
Returns
DataFrame
Result of the arithmetic operation.
See also
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by a constant with the reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
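For completeness, a sketch of fill_value with add itself, reusing other from above; degrees is absent in other, so it is filled with 0 before the addition:
>>> df.add(other, fill_value=0)
           angles  degrees
circle          0    360.0
triangle        6    180.0
rectangle       8    360.0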
| reference/api/pandas.DataFrame.add.html |
pandas.tseries.offsets.Minute.apply | pandas.tseries.offsets.Minute.apply | Minute.apply()#
| reference/api/pandas.tseries.offsets.Minute.apply.html |
Window | Window | Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
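A brief sketch of how these objects are obtained and used (data is illustrative):
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(window=2).sum()
0    NaN
1    3.0
2    5.0
3    7.0
4    9.0
dtype: float64
>>> s.expanding().mean()
0    1.0
1    1.5
2    2.0
3    2.5
4    3.0
dtype: float64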
Rolling window functions#
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
Rolling.mean([numeric_only, engine, ...])
Calculate the rolling mean.
Rolling.median([numeric_only, engine, ...])
Calculate the rolling median.
Rolling.var([ddof, numeric_only, engine, ...])
Calculate the rolling variance.
Rolling.std([ddof, numeric_only, engine, ...])
Calculate the rolling standard deviation.
Rolling.min([numeric_only, engine, ...])
Calculate the rolling minimum.
Rolling.max([numeric_only, engine, ...])
Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof, ...])
Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof, ...])
Calculate the rolling sample covariance.
Rolling.skew([numeric_only])
Calculate the rolling unbiased skewness.
Rolling.kurt([numeric_only])
Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...])
Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation, ...])
Calculate the rolling quantile.
Rolling.sem([ddof, numeric_only])
Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct, ...])
Calculate the rolling rank.
Weighted window functions#
Window.mean([numeric_only])
Calculate the rolling weighted window mean.
Window.sum([numeric_only])
Calculate the rolling weighted window sum.
Window.var([ddof, numeric_only])
Calculate the rolling weighted window variance.
Window.std([ddof, numeric_only])
Calculate the rolling weighted window standard deviation.
Expanding window functions#
Expanding.count([numeric_only])
Calculate the expanding count of non NaN observations.
Expanding.sum([numeric_only, engine, ...])
Calculate the expanding sum.
Expanding.mean([numeric_only, engine, ...])
Calculate the expanding mean.
Expanding.median([numeric_only, engine, ...])
Calculate the expanding median.
Expanding.var([ddof, numeric_only, engine, ...])
Calculate the expanding variance.
Expanding.std([ddof, numeric_only, engine, ...])
Calculate the expanding standard deviation.
Expanding.min([numeric_only, engine, ...])
Calculate the expanding minimum.
Expanding.max([numeric_only, engine, ...])
Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof, ...])
Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof, ...])
Calculate the expanding sample covariance.
Expanding.skew([numeric_only])
Calculate the expanding unbiased skewness.
Expanding.kurt([numeric_only])
Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...])
Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, ...])
Calculate the expanding quantile.
Expanding.sem([ddof, numeric_only])
Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct, ...])
Calculate the expanding rank.
Exponentially-weighted window functions#
ExponentialMovingWindow.mean([numeric_only, ...])
Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum([numeric_only, ...])
Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias, numeric_only])
Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias, numeric_only])
Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, ...])
Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...])
Calculate the ewm (exponential weighted moment) sample covariance.
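A minimal ewm sketch (the alpha value is illustrative; with the default adjust=True the weights are normalized):
>>> s = pd.Series([1, 2, 3])
>>> s.ewm(alpha=0.5).mean()
0    1.000000
1    1.666667
2    2.428571
dtype: float64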
Window indexer#
Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...])
Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...])
Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...])
Calculate window boundaries based on a non-fixed offset such as a BusinessDay.
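A minimal sketch of a custom indexer in action, using forward-looking windows (values are illustrative):
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2)
>>> df.rolling(window=indexer, min_periods=1).sum()
     B
0  1.0
1  3.0
2  2.0
3  4.0
4  4.0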
| reference/window.html |
pandas.io.formats.style.Styler.template_html_table | pandas.io.formats.style.Styler.template_html_table | Styler.template_html_table = <Template 'html_table.tpl'>#
| reference/api/pandas.io.formats.style.Styler.template_html_table.html |
pandas.tseries.offsets.SemiMonthEnd.is_year_start | `pandas.tseries.offsets.SemiMonthEnd.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | SemiMonthEnd.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.SemiMonthEnd.is_year_start.html |
pandas.DataFrame.convert_dtypes | `pandas.DataFrame.convert_dtypes`
Convert columns to best possible dtypes using dtypes supporting pd.NA.
```
>>> df = pd.DataFrame(
... {
... "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
... "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
... "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
... "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
... "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
... "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
... }
... )
``` | DataFrame.convert_dtypes(infer_objects=True, convert_string=True, convert_integer=True, convert_boolean=True, convert_floating=True)[source]#
Convert columns to best possible dtypes using dtypes supporting pd.NA.
New in version 1.0.0.
Parameters
infer_objects : bool, default True
Whether object dtypes should be converted to the best possible types.
convert_string : bool, default True
Whether object dtypes should be converted to StringDtype().
convert_integer : bool, default True
Whether, if possible, conversion can be done to integer extension types.
convert_boolean : bool, default True
Whether object dtypes should be converted to BooleanDtype().
convert_floating : bool, default True
Whether, if possible, conversion can be done to floating extension types. If convert_integer is also True, preference will be given to integer dtypes if the floats can be faithfully cast to integers.
New in version 1.2.0.
Returns
Series or DataFrame
Copy of input object with new dtype.
See also
infer_objects : Infer dtypes of objects.
to_datetime : Convert argument to datetime.
to_timedelta : Convert argument to timedelta.
to_numeric : Convert argument to a numeric type.
Notes
By default, convert_dtypes will attempt to convert a Series (or each
Series in a DataFrame) to dtypes that support pd.NA. By using the options
convert_string, convert_integer, convert_boolean and convert_floating, it is possible to turn off individual conversions
to StringDtype, the integer extension types, BooleanDtype
or floating extension types, respectively.
For object-dtyped columns, if infer_objects is True, use the inference
rules as during normal Series/DataFrame construction. Then, if possible,
convert to StringDtype, BooleanDtype or an appropriate integer
or floating extension type, otherwise leave as object.
If the dtype is integer, convert to an appropriate integer extension type.
If the dtype is numeric, and consists of all integers, convert to an
appropriate integer extension type. Otherwise, convert to an
appropriate floating extension type.
Changed in version 1.2: Starting with pandas 1.2, this method also converts float columns
to the nullable floating extension type.
In the future, as new dtypes are added that support pd.NA, the results
of this method will change to support those new dtypes.
Examples
>>> df = pd.DataFrame(
... {
... "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
... "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
... "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
... "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
... "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
... "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
... }
... )
Start with a DataFrame with default dtypes.
>>> df
a b c d e f
0 1 x True h 10.0 NaN
1 2 y False i NaN 100.5
2 3 z NaN NaN 20.0 200.0
>>> df.dtypes
a int32
b object
c object
d object
e float64
f float64
dtype: object
Convert the DataFrame to use best possible dtypes.
>>> dfn = df.convert_dtypes()
>>> dfn
a b c d e f
0 1 x True h 10 <NA>
1 2 y False i <NA> 100.5
2 3 z <NA> <NA> 20 200.0
>>> dfn.dtypes
a Int32
b string
c boolean
d string
e Int64
f Float64
dtype: object
Start with a Series of strings and missing data represented by np.nan.
>>> s = pd.Series(["a", "b", np.nan])
>>> s
0 a
1 b
2 NaN
dtype: object
Obtain a Series with dtype StringDtype.
>>> s.convert_dtypes()
0 a
1 b
2 <NA>
dtype: string
| reference/api/pandas.DataFrame.convert_dtypes.html |