
Commit 6537beb

Merge remote-tracking branch 'upstream' into fix/issue-1918-bar-plot-dateformatter
2 parents: 39699eb + 9f66b81

29 files changed (+148, -236 lines)

.github/workflows/unit-tests.yml

Lines changed: 1 addition & 2 deletions

@@ -320,8 +320,7 @@ jobs:
 strategy:
 fail-fast: false
 matrix:
-# Separate out macOS 13 and 14, since macOS 14 is arm64 only
-os: [ubuntu-24.04, macOS-13, macOS-14, windows-2025]
+os: [ubuntu-24.04, macos-15-intel, macos-15, windows-2025]

 timeout-minutes: 90

doc/source/user_guide/io.rst

Lines changed: 1 addition & 7 deletions

@@ -303,8 +303,6 @@ compression : {``'infer'``, ``'gzip'``, ``'bz2'``, ``'zip'``, ``'xz'``, ``'zstd'
 As an example, the following could be passed for faster compression and to
 create a reproducible gzip archive:
 ``compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}``.
-
-.. versionchanged:: 1.2.0 Previous versions forwarded dict entries for 'gzip' to ``gzip.open``.
 thousands : str, default ``None``
 Thousands separator.
 decimal : str, default ``'.'``

@@ -1472,7 +1470,7 @@ rather than reading the entire file into memory, such as the following:
 table


-By specifying a ``chunksize`` to ``read_csv``, the return
+By specifying a ``chunksize`` to :func:`read_csv` as a context manager, the return
 value will be an iterable object of type ``TextFileReader``:

 .. ipython:: python

@@ -1482,10 +1480,6 @@ value will be an iterable object of type ``TextFileReader``:
 for chunk in reader:
 print(chunk)

-.. versionchanged:: 1.2
-
-``read_csv/json/sas`` return a context-manager when iterating through a file.
-
 Specifying ``iterator=True`` will also return the ``TextFileReader`` object:

 .. ipython:: python
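
For context, a minimal sketch of the chunked-reading pattern the io.rst wording change describes; the file name and chunk size are illustrative, not part of the diff:

    import pandas as pd

    # read_csv with ``chunksize`` returns a TextFileReader, which can be used
    # as a context manager and iterated over in DataFrame-sized chunks.
    with pd.read_csv("data.csv", chunksize=4) as reader:
        for chunk in reader:
            print(chunk.shape)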

doc/source/user_guide/visualization.rst

Lines changed: 0 additions & 4 deletions

@@ -326,8 +326,6 @@ The ``by`` keyword can be specified to plot grouped histograms:

 In addition, the ``by`` keyword can also be specified in :meth:`DataFrame.plot.hist`.

-.. versionchanged:: 1.4.0
-
 .. ipython:: python

 data = pd.DataFrame(

@@ -480,8 +478,6 @@ columns:

 You could also create groupings with :meth:`DataFrame.plot.box`, for instance:

-.. versionchanged:: 1.4.0
-
 .. ipython:: python
 :suppress:
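For context, a minimal sketch of the ``by`` keyword usage these doc sections cover; the data and column names are illustrative, and matplotlib is assumed to be installed:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    data = pd.DataFrame(rng.standard_normal((100, 2)), columns=["a", "b"])
    data["g"] = rng.choice(["x", "y"], size=100)

    data.plot.hist(by="g")             # one histogram panel per group in "g"
    data.plot.box(column="a", by="g")  # grouped box plots of column "a"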

doc/source/whatsnew/v3.0.0.rst

Lines changed: 3 additions & 0 deletions

@@ -741,6 +741,7 @@ Other Deprecations
 - Deprecated allowing strings representing full dates in :meth:`DataFrame.at_time` and :meth:`Series.at_time` (:issue:`50839`)
 - Deprecated backward-compatibility behavior for :meth:`DataFrame.select_dtypes` matching "str" dtype when ``np.object_`` is specified (:issue:`61916`)
 - Deprecated option "future.no_silent_downcasting", as it is no longer used. In a future version accessing this option will raise (:issue:`59502`)
+- Deprecated passing non-Index types to :meth:`Index.join`; explicitly convert to Index first (:issue:`62897`)
 - Deprecated silent casting of non-datetime 'other' to datetime in :meth:`Series.combine_first` (:issue:`62931`)
 - Deprecated slicing on a :class:`Series` or :class:`DataFrame` with a :class:`DatetimeIndex` using a ``datetime.date`` object, explicitly cast to :class:`Timestamp` instead (:issue:`35830`)
 - Deprecated support for the Dataframe Interchange Protocol (:issue:`56732`)

@@ -1017,6 +1018,7 @@ Timedelta
 - Bug in :class:`Timedelta` constructor failing to raise when passed an invalid keyword (:issue:`53801`)
 - Bug in :meth:`DataFrame.cumsum` which was raising ``IndexError`` if dtype is ``timedelta64[ns]`` (:issue:`57956`)
 - Bug in multiplication operations with ``timedelta64`` dtype failing to raise ``TypeError`` when multiplying by ``bool`` objects or dtypes (:issue:`58054`)
+- Bug in multiplication operations with ``timedelta64`` dtype incorrectly raising when multiplying by numpy-nullable dtypes or pyarrow integer dtypes (:issue:`58054`)

 Timezones
 ^^^^^^^^^

@@ -1200,6 +1202,7 @@ Reshaping
 - Bug in :meth:`DataFrame.combine` with non-unique columns incorrectly raising (:issue:`51340`)
 - Bug in :meth:`DataFrame.explode` producing incorrect result for :class:`pyarrow.large_list` type (:issue:`61091`)
 - Bug in :meth:`DataFrame.join` inconsistently setting result index name (:issue:`55815`)
+- Bug in :meth:`DataFrame.join` not producing the correct row order when joining with a list of Series/DataFrames (:issue:`62954`)
 - Bug in :meth:`DataFrame.join` when a :class:`DataFrame` with a :class:`MultiIndex` would raise an ``AssertionError`` when :attr:`MultiIndex.names` contained ``None``. (:issue:`58721`)
 - Bug in :meth:`DataFrame.merge` where merging on a column containing only ``NaN`` values resulted in an out-of-bounds array access (:issue:`59421`)
 - Bug in :meth:`Series.combine_first` incorrectly replacing ``None`` entries with ``NaN`` (:issue:`58977`)
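
For context, a minimal sketch of the :meth:`Index.join` deprecation noted above; the values are illustrative:

    import pandas as pd

    left = pd.Index([1, 2, 3])
    right = [2, 3, 4]  # a plain list, not an Index

    # Passing ``right`` directly is deprecated per the entry above;
    # converting explicitly keeps the call unambiguous.
    joined = left.join(pd.Index(right), how="inner")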

pandas/core/arrays/categorical.py

Lines changed: 0 additions & 4 deletions

@@ -2545,10 +2545,6 @@ def unique(self) -> Self:
 Return the ``Categorical`` which ``categories`` and ``codes`` are
 unique.

-.. versionchanged:: 1.3.0
-
-Previously, unused categories were dropped from the new categories.
-
 Returns
 -------
 Categorical
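
For context, a minimal sketch of the behaviour this docstring describes (values chosen for illustration):

    import pandas as pd

    cat = pd.Categorical(["b", "a", "b"], categories=["a", "b", "c"])
    # ``unique`` keeps the original dtype, so the unused category "c"
    # is still present in the result's categories.
    cat.unique()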

pandas/core/arrays/masked.py

Lines changed: 0 additions & 4 deletions

@@ -1686,8 +1686,6 @@ def any(
 missing values are present, similar :ref:`Kleene logic <boolean.kleene>`
 is used as for logical operations.

-.. versionchanged:: 1.4.0
-
 Parameters
 ----------
 skipna : bool, default True

@@ -1774,8 +1772,6 @@ def all(
 missing values are present, similar :ref:`Kleene logic <boolean.kleene>`
 is used as for logical operations.

-.. versionchanged:: 1.4.0
-
 Parameters
 ----------
 skipna : bool, default True
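
For context, a minimal sketch of the Kleene-style ``any``/``all`` behaviour these docstrings reference, using a nullable boolean array:

    import pandas as pd

    arr = pd.array([True, pd.NA], dtype="boolean")
    arr.any()              # True  (NA is skipped by default)
    arr.all()              # True  (NA is skipped by default)
    arr.all(skipna=False)  # <NA>, since the missing value could be False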

pandas/core/arrays/numeric.py

Lines changed: 6 additions & 9 deletions

@@ -139,10 +139,9 @@ def _safe_cast(cls, values: np.ndarray, dtype: np.dtype, copy: bool) -> np.ndarr
 raise AbstractMethodError(cls)


-def _coerce_to_data_and_mask(
-values, dtype, copy: bool, dtype_cls: type[NumericDtype], default_dtype: np.dtype
-):
+def _coerce_to_data_and_mask(values, dtype, copy: bool, dtype_cls: type[NumericDtype]):
 checker = dtype_cls._checker
+default_dtype = dtype_cls._default_np_dtype

 mask = None
 inferred_type = None

@@ -163,7 +162,7 @@ def _coerce_to_data_and_mask(
 if copy:
 values = values.copy()
 mask = mask.copy()
-return values, mask, dtype, inferred_type
+return values, mask

 original = values
 if not copy:

@@ -174,6 +173,7 @@ def _coerce_to_data_and_mask(
 if values.dtype == object or is_string_dtype(values.dtype):
 inferred_type = lib.infer_dtype(values, skipna=True)
 if inferred_type == "boolean" and dtype is None:
+# object dtype array of bools
 name = dtype_cls.__name__.strip("_")
 raise TypeError(f"{values.dtype} cannot be converted to {name}")

@@ -252,7 +252,7 @@ def _coerce_to_data_and_mask(
 values = values.astype(dtype, copy=copy)
 else:
 values = dtype_cls._safe_cast(values, dtype, copy=False)
-return values, mask, dtype, inferred_type
+return values, mask


 class NumericArray(BaseMaskedArray):

@@ -296,10 +296,7 @@ def _coerce_to_array(
 cls, value, *, dtype: DtypeObj, copy: bool = False
 ) -> tuple[np.ndarray, np.ndarray]:
 dtype_cls = cls._dtype_cls
-default_dtype = dtype_cls._default_np_dtype
-values, mask, _, _ = _coerce_to_data_and_mask(
-value, dtype, copy, dtype_cls, default_dtype
-)
+values, mask = _coerce_to_data_and_mask(value, dtype, copy, dtype_cls)
 return values, mask

 @classmethod
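
For context, a small sketch of the public path that exercises the coercion refactored above; this is not the internal API itself, just a way to reach it:

    import pandas as pd

    # pd.array with a nullable integer dtype goes through the
    # _coerce_to_array / _coerce_to_data_and_mask path simplified above.
    arr = pd.array([1, 2, None], dtype="Int64")
    arr.isna()  # array([False, False,  True])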

pandas/core/arrays/string_.py

Lines changed: 1 addition & 3 deletions

@@ -573,9 +573,7 @@ class StringArray(BaseStringArray, NumpyExtensionArray): # type: ignore[misc]
 :meth:`pandas.array` with ``dtype="string"`` for a stable way of
 creating a `StringArray` from any sequence.

-.. versionchanged:: 1.5.0
-
-StringArray now accepts array-likes containing
+StringArray accepts array-likes containing
 nan-likes(``None``, ``np.nan``) for the ``values`` parameter
 in addition to strings and :attr:`pandas.NA`
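For context, a minimal sketch of the nan-like handling this docstring describes; the literals are illustrative:

    import numpy as np
    import pandas as pd

    # None and np.nan are accepted alongside strings and pd.NA,
    # and are stored as missing values (<NA>).
    pd.array(["a", None, np.nan, pd.NA], dtype="string")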

pandas/core/arrays/timedeltas.py

Lines changed: 9 additions & 1 deletion

@@ -51,7 +51,11 @@
 is_string_dtype,
 pandas_dtype,
 )
-from pandas.core.dtypes.dtypes import ExtensionDtype
+from pandas.core.dtypes.dtypes import (
+ArrowDtype,
+BaseMaskedDtype,
+ExtensionDtype,
+)
 from pandas.core.dtypes.missing import isna

 from pandas.core import (

@@ -501,6 +505,10 @@ def __mul__(self, other) -> Self:
 f"Cannot multiply '{self.dtype}' by bool, explicitly cast to "
 "integers instead"
 )
+if isinstance(other.dtype, (ArrowDtype, BaseMaskedDtype)):
+# GH#58054
+return NotImplemented
+
 if len(other) != len(self) and not lib.is_np_dtype(other.dtype, "m"):
 # Exclude timedelta64 here so we correctly raise TypeError
 # for that instead of ValueError
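
For context, a minimal sketch of the case this change targets (see the Timedelta whatsnew entry above); per the diff, the timedelta array now defers to the nullable array's reflected op instead of raising. The values are illustrative:

    import pandas as pd

    td = pd.Series(pd.to_timedelta(["1h", "2h"]))
    factors = pd.Series([2, None], dtype="Int64")  # numpy-nullable integers

    # Multiplication should now propagate the missing value rather than raise.
    td * factors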

pandas/core/frame.py

Lines changed: 9 additions & 17 deletions

@@ -554,8 +554,6 @@ class DataFrame(NDFrame, OpsMixin):
 If data is a dict containing one or more Series (possibly of different dtypes),
 ``copy=False`` will ensure that these inputs are not copied.

-.. versionchanged:: 1.3.0
-
 See Also
 --------
 DataFrame.from_records : Constructor from tuples, also record arrays.

@@ -2686,17 +2684,13 @@ def to_stata(
 8 characters and values are repeated.
 {compression_options}

-.. versionchanged:: 1.4.0 Zstandard support.
-
 {storage_options}

 value_labels : dict of dicts
 Dictionary containing columns as keys and dictionaries of column value
 to labels as values. Labels for a single variable must be 32,000
 characters or smaller.

-.. versionadded:: 1.4.0
-
 Raises
 ------
 NotImplementedError

@@ -3534,8 +3528,6 @@ def to_xml(
 scripts and not later versions is currently supported.
 {compression_options}

-.. versionchanged:: 1.4.0 Zstandard support.
-
 {storage_options}

 Returns

@@ -9487,13 +9479,6 @@ def groupby(
 when the result's index (and column) labels match the inputs, and
 are included otherwise.

-.. versionchanged:: 1.5.0
-
-Warns that ``group_keys`` will no longer be ignored when the
-result from ``apply`` is a like-indexed Series or DataFrame.
-Specify ``group_keys`` explicitly to include the group keys or
-not.
-
 .. versionchanged:: 2.0.0

 ``group_keys`` now defaults to ``True``.

@@ -11408,12 +11393,18 @@ def join(

 # join indexes only using concat
 if can_concat:
-if how == "left":
+if how == "left" or how == "right":
 res = concat(
 frames, axis=1, join="outer", verify_integrity=True, sort=sort
 )
-return res.reindex(self.index)
+index = self.index if how == "left" else frames[-1].index
+if sort:
+index = index.sort_values()
+result = res.reindex(index)
+return result
 else:
+if how == "outer":
+sort = True
 return concat(
 frames, axis=1, join=how, verify_integrity=True, sort=sort
 )

@@ -11424,6 +11415,7 @@ def join(
 joined = merge(
 joined,
 frame,
+sort=sort,
 how=how,
 left_index=True,
 right_index=True,
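
For context, a minimal sketch of the list-join case this change touches; the frames are illustrative. Per the diff, ``how="right"`` now reindexes to the last frame's index and ``sort`` is forwarded to the underlying merges:

    import pandas as pd

    left = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])
    other1 = pd.DataFrame({"b": [3, 4]}, index=["y", "z"])
    other2 = pd.DataFrame({"c": [5, 6]}, index=["z", "y"])

    # Joining against a list of frames; with how="right" the row order
    # should follow other2's index, per the reindex added above.
    left.join([other1, other2], how="right")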
