pyspark.pandas.Index.is_monotonic

property Index.is_monotonic

Return a boolean indicating whether values in the object are monotonically increasing.
Note
The current implementation of is_monotonic requires shuffling and aggregating multiple times to check the order locally and globally, which is potentially expensive. In the case of a multi-index, all data are currently transferred to a single node, which can easily cause an out-of-memory error.
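As a rough, single-machine sketch of that local/global check (this is not the actual pandas-on-Spark implementation; the function name and the partitions argument are purely illustrative):

def is_monotonic_increasing(partitions):
    """Conceptual illustration: `partitions` is an ordered list of lists of comparable values."""
    # Local check: values inside each partition must already be sorted.
    locally_sorted = all(
        all(a <= b for a, b in zip(part, part[1:])) for part in partitions
    )
    # Global check: the last value of each partition must not exceed the
    # first value of the next non-empty partition.
    bounds = [(part[0], part[-1]) for part in partitions if part]
    globally_sorted = all(
        prev_last <= next_first
        for (_, prev_last), (next_first, _) in zip(bounds, bounds[1:])
    )
    return locally_sorted and globally_sorted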
Note
Disable the Spark config spark.sql.optimizer.nestedSchemaPruning.enabled for a multi-index if you’re using pandas-on-Spark < 1.7.0 with PySpark 3.1.1.
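As a sketch, that config can be toggled at runtime through the active SparkSession (how you obtain the session depends on your application; getOrCreate() here is just one option):

>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> # turn off nested schema pruning before calling is_monotonic on a multi-index
>>> spark.conf.set("spark.sql.optimizer.nestedSchemaPruning.enabled", "false")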
Returns
    is_monotonic : bool
Examples
>>> import pyspark.pandas as ps
>>> ser = ps.Series(['1/1/2018', '3/1/2018', '4/1/2018'])
>>> ser.is_monotonic
True
>>> df = ps.DataFrame({'dates': [None, '1/1/2018', '2/1/2018', '3/1/2018']})
>>> df.dates.is_monotonic
False

>>> df.index.is_monotonic
True

>>> ser = ps.Series([1])
>>> ser.is_monotonic
True

>>> ser = ps.Series([])
>>> ser.is_monotonic
True

>>> ser.rename("a").to_frame().set_index("a").index.is_monotonic
True

>>> ser = ps.Series([5, 4, 3, 2, 1], index=[1, 2, 3, 4, 5])
>>> ser.is_monotonic
False

>>> ser.index.is_monotonic
True
Support for MultiIndex
>>> midx = ps.MultiIndex.from_tuples(
...     [('x', 'a'), ('x', 'b'), ('y', 'c'), ('y', 'd'), ('z', 'e')])
>>> midx
MultiIndex([('x', 'a'),
            ('x', 'b'),
            ('y', 'c'),
            ('y', 'd'),
            ('z', 'e')],
           )
>>> midx.is_monotonic
True

>>> midx = ps.MultiIndex.from_tuples(
...     [('z', 'a'), ('z', 'b'), ('y', 'c'), ('y', 'd'), ('x', 'e')])
>>> midx
MultiIndex([('z', 'a'),
            ('z', 'b'),
            ('y', 'c'),
            ('y', 'd'),
            ('x', 'e')],
           )
>>> midx.is_monotonic
False