# Delta Sharing Java Connector
A Java connector for [Delta Sharing](https://delta.io/sharing/) that allows you to easily ingest data on any JVM.
## Project Description
This project brings Delta Sharing capabilities to Java.
The Java connector follows the Delta Sharing protocol to read shared tables from a Delta Sharing server. To limit egress costs on the Data Provider side, we implemented a persistent cache that removes any unnecessary reads.
- Data is served to the connector via a persistent cache to limit egress costs whenever possible.
- Instead of keeping all table data in memory, the connector uses file stream readers to serve larger datasets even when there isn't enough memory available.
- Each table has a dedicated file stream reader per part file held in the persistent cache. File stream readers allow the data to be read in blocks of records, giving more flexibility in processing.
- Data records are provided as a set of Avro GenericRecords, which offer a good balance between flexibility of representation and integration capabilities. GenericRecords can easily be exported to JSON and/or other formats using Avro's EncoderFactory.
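
As a minimal sketch of the export path mentioned above, the snippet below serializes a single Avro `GenericRecord` to JSON with `EncoderFactory`. The schema and field names here are illustrative, not the connector's actual table schema.

```java
import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.io.JsonEncoder;

public class RecordJson {
    /** Serializes one GenericRecord to a JSON string via Avro's EncoderFactory. */
    static String toJson(GenericRecord record, Schema schema) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        JsonEncoder encoder = EncoderFactory.get().jsonEncoder(schema, out);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();
        return out.toString("UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Illustrative schema; real schemas come from the shared table's metadata
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Row\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"long\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", 1L);
        record.put("name", "example");
        System.out.println(toJson(record, schema));
    }
}
```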
- Every time data access is requested, the connector checks for metadata updates and refreshes the table data if the metadata has changed.
- The connector requests the table's metadata from the provider based on its coordinate. The table coordinate is the profile file path followed by `#` and the fully qualified table name (`<share-name>.<schema-name>.<table-name>`).
- A lookup of table to metadata is maintained inside the JVM. The connector compares the received metadata with the last metadata snapshot. If there is no change, the existing table data is served from the cache; otherwise, the connector refreshes the table data in the cache.
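
The refresh decision above can be sketched as a simple lookup keyed by table coordinate. This is a simplified illustration, not the connector's actual class: metadata is reduced to an opaque snapshot string, and the class name `MetadataCache` is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MetadataCache {
    // Table coordinate (e.g. "profile.json#share.schema.table") -> last metadata snapshot
    private final Map<String, String> lastSeen = new HashMap<>();

    /**
     * Records the metadata just received from the provider and returns true
     * when it differs from the last snapshot, i.e. the cached table data
     * must be refreshed.
     */
    public boolean needsRefresh(String coordinate, String currentMetadata) {
        String previous = lastSeen.put(coordinate, currentMetadata);
        return !Objects.equals(previous, currentMetadata);
    }

    public static void main(String[] args) {
        MetadataCache cache = new MetadataCache();
        String coord = "profile.json#share.schema.table";
        System.out.println(cache.needsRefresh(coord, "v1")); // first sight: refresh
        System.out.println(cache.needsRefresh(coord, "v1")); // unchanged: serve cache
    }
}
```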
- When metadata changes are detected, both the data and the metadata are updated.
- The connector requests pre-signed URLs for the table defined by the fully qualified table name. It only downloads the files whose metadata has changed and stores them in the persistent cache location.
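
The download step can be sketched as copying the content behind a pre-signed URL into the cache directory. This is a hedged illustration, not the connector's implementation: the class name, the sample URL, and the cache layout are all hypothetical.

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class PresignedDownload {
    /** Downloads the content behind a (pre-signed) URL into the cache location. */
    static Path download(URL url, Path cacheDir, String fileName) throws Exception {
        Files.createDirectories(cacheDir);
        Path target = cacheDir.resolve(fileName);
        try (InputStream in = url.openStream()) {
            // The real connector would skip this copy when the file is unchanged
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        return target;
    }

    public static void main(String[] args) throws Exception {
        Path cache = Files.createTempDirectory("delta-sharing-cache");
        // Hypothetical pre-signed URL; a real one comes from the Delta Sharing server
        URL url = new URL("https://example.com/part-00000.parquet?signature=abc");
        System.out.println("Would download " + url + " into " + cache);
    }
}
```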
In the current implementation, the persistent cache is located in a dedicated temporary location that is destroyed when the JVM shuts down. This is an important consideration since it avoids persisting orphaned data locally.
## Project Support
Please note that all projects in the /databrickslabs github account are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.
Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.
## Building the Project
The project is built with Maven.
To build the project locally:
- Make sure you are in the root directory of the project
- Run `mvn clean install`
- The jars will be available in the `/target` directory
## Using the Project
To use the connector in your projects, add the Maven coordinates with the desired version.
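
For example, a dependency entry in your `pom.xml` might look like the following. The `groupId`, `artifactId`, and `version` shown here are illustrative assumptions; check the project's own `pom.xml` for the actual coordinates and latest version.

```xml
<!-- Hypothetical coordinates; verify against the project's pom.xml -->
<dependency>
    <groupId>com.databricks.labs</groupId>
    <artifactId>delta-sharing-java-connector</artifactId>
    <version>1.0.0</version>
</dependency>
```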
.. delta-sharing-java-connector documentation master file, created by
   sphinx-quickstart on Wed Feb 2 11:01:42 2022.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

.. image:: images/delta-sharing-java-logo.png
   :width: 10%
   :alt: delta-sharing-java-connector
   :align: left

delta-sharing-java-connector is an extension to `Delta Sharing <https://delta.io/sharing/>`__ that allows easy ingestion of Delta Sharing tables into Java applications.

.. image:: images/high-level-design.png
   :width: 100%
   :alt: High level design of the connector
   :align: center

+
18
+
delta-sharing-java-connector provides:
- simple-to-use APIs;
- local caching of remote files to limit egress/ingress costs;
- readers based on round-robin streams to limit runtime memory requirements;
Since this is a Java connector, all APIs can be used from either Java or Scala.
The intended usage of the connector is to provide connectivity to remote data hosted on Delta Sharing from JVM-based applications that may run only on a single node.
Note: the code depends on Spark binaries but does not require a running Spark service at serving time.

Documentation
=============

.. toctree::
   :maxdepth: 1
   :caption: Contents:

   provider/json
   usage/DeltaSharing
   usage/TableReader


Indices and tables
==================

* :ref:`genindex`
* :ref:`search`
.. * :ref:`modindex`

Project Support
===============

Please note that all projects in the ``databrickslabs`` github space are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.
Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.