## lakehouse_parquet_read_as_spark

Read a Parquet file from a Fabric lakehouse Files path.
This reads from the lakehouse `Files/` area using Spark. If Spark cannot
read the original Parquet file because of timestamp precision issues, the
helper tries a fallback `_tsus` path. If that fallback file does not exist,
the helper converts the single local Parquet file from nanosecond to
microsecond timestamps and then retries the fallback path.
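The `_tsus` fallback naming can be sketched as a small path helper. This is an illustrative reconstruction, not the library's actual implementation: the exact suffix placement (`_tsus` before the `.parquet` extension, matching the original file name) is an assumption inferred from the description above.

```python
from pathlib import PurePosixPath


def tsus_fallback_path(relative_path: str) -> str:
    # Hypothetical helper: derive the "_tsus" (timestamp-in-microseconds)
    # sibling path for a Parquet file, e.g.
    #   raw/orders/orders_2026.parquet -> raw/orders/orders_2026_tsus.parquet
    p = PurePosixPath(relative_path)
    return str(p.with_name(p.stem + "_tsus" + p.suffix))
```

A helper like this keeps the converted copy next to the original, so repeated reads can reuse the microsecond version without converting again.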
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `lh` | `Housepath` | Lakehouse path object (e.g. returned by `get_path`, as in the example below). | required |
| `relative_path` | `str` | Path to the Parquet file under the lakehouse `Files/` area. | required |
| `verbose` | `bool` | Whether to print read and fallback progress. | `True` |
| `spark_session` | `object` | Spark session to use. If omitted, the helper uses the notebook-global session. | `None` |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Spark DataFrame loaded from the original or converted Parquet path. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If |
| `RuntimeError` | If neither the original path nor the converted fallback path can be read successfully. |
Examples:
>>> lh_source = get_path("Sandbox", "Source", config=CONFIG)
>>> df = lakehouse_parquet_read_as_spark(
... lh_source,
... "raw/orders/orders_2026.parquet",
... )
Notes:
Assumes Fabric notebook runtime filesystem conventions for local fallback
conversion paths (``/lakehouse/default/Files/...``).
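The overall read/fallback/convert flow described above can be sketched as plain control flow. Everything here is illustrative: `read_with_fallback` and its callback parameters are hypothetical names standing in for the helper's internal Spark read and local Parquet conversion steps.

```python
def read_with_fallback(read, original_path, fallback_path, convert, verbose=True):
    """Sketch of the helper's flow: try the original, then the fallback,
    converting ns -> us timestamps locally if the fallback is missing."""
    try:
        return read(original_path)          # 1) try the original Parquet file
    except Exception as exc:                # 2) e.g. timestamp precision error
        if verbose:
            print(f"read failed ({exc!r}); trying fallback {fallback_path}")
    try:
        return read(fallback_path)          # 3) precomputed _tsus copy, if any
    except FileNotFoundError:
        convert(original_path, fallback_path)  # 4) rewrite ns -> us locally
        return read(fallback_path)             # 5) retry the converted file
```

Structuring it this way means the conversion cost is paid at most once per file; later calls hit either the original or the already-converted fallback directly.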
Source code in `src/fabricops_kit/fabric_io.py`