
A Standalone Job in Scala - Spark Screencast #4

In this Spark screencast, we create a standalone Apache Spark job in Scala. In the job, we create a SparkContext and read a file into an RDD of strings; we then apply transformations and actions to the RDD and print the results.
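As a rough sketch of what such a standalone job might look like (the object name, input path, and the specific transformations here are illustrative assumptions, not the exact code from the screencast):

```scala
// A minimal standalone Spark job in Scala: create a SparkContext,
// read a file into an RDD of strings, apply transformations and actions,
// and print the results. Names and paths are illustrative.
import org.apache.spark.{SparkConf, SparkContext}

object SimpleJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Job")
    val sc = new SparkContext(conf)

    // Read a text file into an RDD of strings (one element per line).
    val lines = sc.textFile("input.txt")

    // Transformation: keep only lines that mention "Spark".
    val sparkLines = lines.filter(line => line.contains("Spark"))

    // Action: bring the matching lines back to the driver and print them.
    sparkLines.collect().foreach(println)

    // Action: count the matching lines.
    println(s"Lines mentioning Spark: ${sparkLines.count()}")

    sc.stop()
  }
}
```

A job like this would typically be packaged with sbt and then run against a Spark installation, for example via spark-submit.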

For more information and links to other Spark screencasts, check out the Spark documentation page.

