Using Apache Spark 2.0 and Python, I'll show how to import a table from a relational database (via its JDBC driver) into a DataFrame and save it as a Parquet file. In this demo the database is Oracle 12.x.

```
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .getOrCreate()

df ="jdbc") \
    .options(url="jdbc:oracle:thin:ro/",
             dbtable="myuser.dim_country",
             driver="oracle.jdbc.OracleDriver") \
    .load()
```
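The intro also promises saving the table to a Parquet file, so here is a minimal sketch of that last step. It assumes `df` is the DataFrame loaded above; the output path is an assumption for illustration, and running it requires a live Spark session and the Oracle JDBC driver on the classpath.

```python
# Write the DataFrame to Parquet; the path below is a hypothetical example.
# mode("overwrite") replaces any existing output at that location.
df.write.mode("overwrite").parquet("/tmp/dim_country.parquet")

# Read it back later without touching the database again:
df2 ="/tmp/dim_country.parquet")
```

Parquet is columnar and compressed, so the round trip is typically much cheaper than re-querying the database over JDBC.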