
Commit 0491b68

Merge pull request #7 from databrickslabs/feature/add-docs
Feature/add docs
2 parents 62126b5 + 3f5c3ba commit 0491b68

13 files changed

Lines changed: 360 additions & 12 deletions


‎.github/workflows/docs.yml‎

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
+name: docs
+on:
+  push:
+    branches:
+      - main
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - name: install pandoc
+        run: sudo apt-get install pandoc
+      - uses: actions/setup-python@v2
+      - uses: actions/checkout@v2
+        with:
+          ref: main
+          fetch-depth: 0 # otherwise, you will fail to push refs to dest repo
+      - name: Build and Commit
+        uses: sphinx-notes/pages@v2
+        with:
+          documentation_path: docs/source
+          requirements_path: docs/docs-requirements.txt
+      - name: Push changes
+        uses: ad-m/github-push-action@master
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          branch: gh-pages

‎.gitignore‎

Lines changed: 4 additions & 1 deletion
@@ -200,4 +200,7 @@ Icon
 .AppleDesktop
 Network Trash Folder
 Temporary Items
-.apdisk
+.apdisk
+
+# Ignore docs/_build
+docs/_build

‎README.md‎

Lines changed: 27 additions & 11 deletions
@@ -1,8 +1,26 @@
-# PROJECT NAME
-Standard Project Template for Databricks Labs Projects
+# Delta Sharing Java Connector
+A java connector for [delta-sharing](https://delta.io/sharing/) that allows you to easily ingest data on any JVM.
+
 
 ## Project Description
-Short description of project's purpose
+This project brings delta-sharing capabilities to Java.
+The Java connector follows the Delta Sharing protocol to read shared tables from a Delta Sharing Server. To reduce and limit egress costs on the Data Provider side, we implemented a persistent cache that removes any unnecessary reads.
+
+- The data is served to the connector via a persisted cache to limit the egress costs whenever possible.
+- Instead of keeping all table data in memory, we use file stream readers to serve larger datasets even when there isn't enough memory available.
+- Each table has a dedicated file stream reader per part file held in the persistent cache. File stream readers allow us to read the data in blocks of records and process it with more flexibility.
+- Data records are provided as a set of Avro GenericRecords, which strike a good balance between flexibility of representation and integration capabilities. GenericRecords can easily be exported to JSON and/or other formats using EncoderFactory in Avro.
+
+- Every time data access is requested, the connector checks for metadata updates and refreshes the table data if the metadata has changed.
+- The connector requests the metadata for the table from the provider based on its coordinate. The table coordinate is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
+- A lookup from table to metadata is maintained inside the JVM. The connector compares the received metadata with the last metadata snapshot. If there is no change, the existing table data is served from cache; otherwise, the connector refreshes the table data in the cache.
+
+- When metadata changes are detected, both the data and the metadata are updated.
+- The connector requests the pre-signed urls for the table defined by the fully qualified table name, downloads only the files whose metadata has changed, and stores these files in the persisted cache location.
+
+
+In the current implementation, the persistent cache is located in dedicated temporary locations that are destroyed when the JVM is shut down. This is an important consideration since it avoids persisting orphaned data locally.
+
 
 ## Project Support
 Please note that all projects in the /databrickslabs github account are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.
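The table coordinate described in the README (profile file path, then `#`, then the fully qualified table name) can be sketched as a small parser. The `TableCoordinate` class and `parse` method below are illustrative only, not part of the connector's actual API:

```java
// Illustrative sketch of parsing a Delta Sharing table coordinate of the form
// <profile-file-path>#<share-name>.<schema-name>.<table-name>.
// Class and method names are hypothetical, not the connector's public API.
public class TableCoordinate {
    public final String profilePath;
    public final String share;
    public final String schema;
    public final String table;

    public TableCoordinate(String profilePath, String share, String schema, String table) {
        this.profilePath = profilePath;
        this.share = share;
        this.schema = schema;
        this.table = table;
    }

    public static TableCoordinate parse(String coordinate) {
        // The profile path and the fully qualified name are separated by '#'.
        int hash = coordinate.indexOf('#');
        if (hash < 0) {
            throw new IllegalArgumentException(
                "Expected <profile-path>#<share>.<schema>.<table>, got: " + coordinate);
        }
        String profilePath = coordinate.substring(0, hash);
        // The fully qualified name has exactly three dot-separated parts.
        String[] parts = coordinate.substring(hash + 1).split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException(
                "Expected fully qualified name <share>.<schema>.<table>, got: "
                    + coordinate.substring(hash + 1));
        }
        return new TableCoordinate(profilePath, parts[0], parts[1], parts[2]);
    }
}
```

For example, `TableCoordinate.parse("/tmp/open-datasets.share#my_share.default.my_table")` would yield the profile path `/tmp/open-datasets.share` and the share/schema/table triple used to look up metadata.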
@@ -11,13 +29,11 @@ Any issues discovered through the use of this project should be filed as GitHub
 
 
 ## Building the Project
-Instructions for how to build the project
-
-## Deploying / Installing the Project
-Instructions for how to deploy the project, or install it
-
-## Releasing the Project
-Instructions for how to release a version of the project
+The project is implemented on top of maven.
+To build the project locally:
+- Make sure you are in the root directory of the project
+- Run `mvn clean install`
+- The jars will be available in the /target directory
 
 ## Using the Project
-Simple examples on how to use the project
+To use the connector in your projects, use the maven coordinates with the desired version.
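The README refers readers to the project's maven coordinates without listing them; a dependency declaration would look roughly like the sketch below. The `groupId`, `artifactId`, and `version` shown are placeholders, not published coordinates — check the project's released artifacts for the real values.

```xml
<!-- Placeholder coordinates; verify against the project's published artifacts. -->
<dependency>
    <groupId>com.databricks.labs</groupId>
    <artifactId>delta-sharing-java-connector</artifactId>
    <version>0.1.0</version>
</dependency>
```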

‎codecov.yml‎

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+ignore:
+  - "vendor/**/*"
+  - "dist/**/*"
+codecov:
+  token: 8b1ac57c-84af-41c8-b7d2-f00fff24ef4c
+coverage:
+  status:
+    project: yes
+    patch: true
+    changes: true
+comment:
+  layout: "reach, diff, flags, files"
+  behavior: default
+  require_changes: false # if true: only post the comment if coverage changes
+  require_base: true # [true :: must have a base report to post]
+  require_head: true # [true :: must have a head report to post]
+  branches: [] # branch names that can post comment

‎docs/Makefile‎

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS    ?=
+SPHINXBUILD   ?= sphinx-build
+SOURCEDIR     = source
+BUILDDIR      = _build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
‎docs/docs-requirements.txt‎

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+Sphinx==4.4.0
+sphinx-material==0.0.35
+nbsphinx==0.8.8
+pandoc==2.0.1
+ipython==8.0.1
+sphinxcontrib-fulltoc==1.2.0
+livereload==2.6.3
+autodocsumm==0.2.7
+sphinx-tabs==3.2.0

‎docs/source/conf.py‎

Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('../../python'))
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'delta-sharing-java-connector'
+copyright = '2022, Databricks Inc'
+author = 'Milos Colic, Vuong Nguyen'
+
+# The full version, including alpha/beta/rc tags
+release = 'v0.1-alpha'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    "sphinx_material",
+    "nbsphinx",
+    "sphinx_tabs.tabs",
+    "sphinx.ext.githubpages",
+    "sphinx.ext.autosectionlabel",
+    "sphinx.ext.todo"
+]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', ".env"]
+source_suffix = [".rst", ".md"]
+
+pygments_style = 'sphinx'
+nbsphinx_execute = 'never'
+napoleon_use_admonition_for_notes = True
+sphinx_tabs_disable_tab_closing = True
+todo_include_todos = True
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_material'
+
+# Material theme options (see theme.conf for more information)
+html_theme_options = {
+
+    # Set the name of the project to appear in the navigation.
+    'nav_title': f'delta-sharing-java-connector {release}',
+
+    # Specify a base_url used to generate sitemap.xml. If not
+    # specified, then no sitemap will be built.
+    # 'base_url': 'https://project.github.io/project',
+
+    # Set the color and the accent color
+    'color_primary': 'green',
+    'color_accent': 'green',
+
+    # Set the repo location to get a badge with stats
+    'repo_url': 'https://github.com/databrickslabs/delta-sharing-java-connector/',
+    'repo_name': 'delta-sharing-java-connector',
+
+    'master_doc': False,
+
+    'globaltoc_depth': 1,
+    'globaltoc_collapse': True,
+    'globaltoc_includehidden': True,
+    'heroes': {'index': 'Simple and easy to use java connector for delta sharing.'},
+    "version_dropdown": True,
+    # "version_json": "../versions-v2.json",
+
+}
+html_title = project
+html_short_title = project
+html_logo = 'images/delta-sharing-java-logo.png'
+html_sidebars = {
+    "**": ["logo-text.html", "globaltoc.html", "localtoc.html", "searchbox.html"]
+}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
‎docs/source/images/delta-sharing-java-logo.png‎ (binary, 93.5 KB)

‎docs/source/images/high-level-design.png‎ (binary, 232 KB)

‎docs/source/index.rst‎

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
+.. delta-sharing-java-connector documentation master file, created by
+   sphinx-quickstart on Wed Feb 2 11:01:42 2022.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+.. image:: images/delta-sharing-java-logo.png
+   :width: 10%
+   :alt: delta-sharing-java-connector
+   :align: left
+
+delta-sharing-java-connector is an extension to `delta sharing <https://delta.io/sharing/>`__ that allows easy and simple ingestion of delta sharing tables in java applications.
+
+.. image:: images/high-level-design.png
+   :width: 100%
+   :alt: High level design of the connector
+   :align: center
+
+delta-sharing-java-connector provides:
+
+- simple to use APIs;
+- local caching of remote files to limit egress/ingress costs;
+- readers based on round robin streams to limit runtime memory requirements;
+
+Since this is a java connector, all APIs can be used from either java or scala.
+The intended usage of the connector is to provide connectivity to remote data hosted on delta sharing
+from JVM based applications that may run only on a single node.
+
+Note: the code does depend on spark binaries but does not require a spark service running at serving time.
+
+
+Documentation
+=============
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Contents:
+
+   provider/json
+   usage/DeltaSharing
+   usage/TableReader
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`search`
+
+
+.. * :ref:`modindex`
+
+
+Project Support
+===============
+
+Please note that all projects in the ``databrickslabs`` github space are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.
+
+Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.
