
I am learning Spark and Scala with IntelliJ, and started with the small piece of code below:

import org.apache.spark.{SparkConf, SparkContext}

object ActionsTransformations {

  def main(args: Array[String]): Unit = {
    //Create a SparkContext to initialize Spark
    val conf = new SparkConf()
    conf.setMaster("local")
    conf.setAppName("Word Count")
    val sc = new SparkContext(conf)

    val numbersList = sc.parallelize(1.to(10000).toList)

    println(numbersList)
  }

}

When trying to run it, I get the exception below:

Exception in thread "main" java.net.BindException: Can't assign requested address: Service 'sparkDriver' failed after 16 retries (on a random free port)! Consider explicitly setting the appropriate binding address for the service 'sparkDriver' (for example spark.driver.bindAddress for SparkDriver) to the correct binding address.
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:501)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1218)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:496)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:481)
    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:965)
    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:210)
    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:353)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:745)

Process finished with exit code 1

Can anyone suggest what to do?


9 Answers


Sometimes the problem is caused by a connected VPN or something similar. Just disconnect your VPN, or any other tool that may affect your networking, and then try again.


4 Comments

Thanks, this worked for me. I was on the company VPN; although it had been working previously, it suddenly started throwing this error, so I disconnected from the VPN and it started working again.
This. Why is this not the top answer?
This should be the top answer.
Try this before anything else.

Add SPARK_LOCAL_IP to the load-spark-env.sh file located in the spark/bin directory:

export SPARK_LOCAL_IP="127.0.0.1"
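Note that if you run the program directly from IntelliJ rather than through spark-submit, load-spark-env.sh is never sourced. As a rough sketch (an assumption, not part of the original answer), you can mirror SPARK_LOCAL_IP from inside the program, falling back to loopback when the variable is unset:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: honor SPARK_LOCAL_IP when it is set, otherwise bind the
// driver to loopback. The 127.0.0.1 fallback is an assumption for
// local IDE runs.
val bindAddress = sys.env.getOrElse("SPARK_LOCAL_IP", "127.0.0.1")
val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("Word Count")
  .set("spark.driver.bindAddress", bindAddress)
val sc = new SparkContext(conf)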

3 Comments

This helped in my case. master("local[*]") did not.
Works for me, but the file is under spark/conf. You can use echo "export SPARK_LOCAL_IP=127.0.0.1" >> $SPARK_HOME/conf/spark-env.sh after defining SPARK_HOME, if not already done.
Doing this in load-spark-env.sh did not work for me, but it did work when I put it in .zshrc.

It seems you've used an old version of Spark. In your case, try adding this line:

conf.set("spark.driver.bindAddress", "127.0.0.1")

If you use Spark 2.0+, the following should do the trick:

import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession.builder()
  .appName("Word Count")
  .master("local[*]")
  .config("spark.driver.bindAddress", "127.0.0.1")
  .getOrCreate()
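Applied to the question's program, a minimal end-to-end sketch looks like this (assuming Spark 2.x with the spark-sql artifact on the classpath; the count() action is added here only to force evaluation):

import org.apache.spark.sql.SparkSession

object ActionsTransformations {

  def main(args: Array[String]): Unit = {
    // Bind the driver to loopback so Spark never tries to resolve
    // an unreachable local hostname.
    val spark = SparkSession.builder()
      .appName("Word Count")
      .master("local[*]")
      .config("spark.driver.bindAddress", "127.0.0.1")
      .getOrCreate()

    val numbersList = spark.sparkContext.parallelize(1.to(10000).toList)
    println(numbersList.count()) // count() is an action, so the RDD is actually evaluated

    spark.stop()
  }
}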



The following should do the trick:

sudo hostname -s 127.0.0.1

1 Comment

This worked for me on GitHub Actions. Using this, I didn't have to change the code between the dev and integration environments.

This worked for me for the same error with PySpark:

from pyspark import SparkContext, SparkConf
conf_spark = SparkConf().set("spark.driver.host", "127.0.0.1")
sc = SparkContext(conf=conf_spark)



Add your hostname with your internal IP to /etc/hosts.

More explanation:

Get your hostname with this command:

hostname
# OR
cat /proc/sys/kernel/hostname

Get your internal IP with this command:

ip a

Substitute your values and add the line to /etc/hosts:

${INTERNAL_IP} ${HOSTNAME}

Example:

192.168.1.5 bashiri_pc

Or (the previous line is better!):

127.0.0.1 bashiri_pc



I think setMaster and setAppName return a new SparkConf object, and the line conf.setMaster("local") will have no effect on the conf variable. So you should try:

val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("Word Count")



It seems like the ports Spark is trying to bind to are already in use. Did this issue start happening after you ran Spark successfully a few times? You may want to check whether those previously-run Spark processes are still alive and holding onto some ports (a simple jps / ps -ef should tell you that). If so, kill those processes and try again.

1 Comment

Notice: "Can't assign requested address: Service 'sparkDriver' failed after 16 retries (on a random free port)". The error says it already tried 16 random free ports!

For this issue, use the following (PySpark) code. It is the same fix as in another answer, but with improved indentation:

from pyspark.sql import SparkSession

spark: SparkSession = (
    SparkSession.builder.appName("test")
    .master("local[*]")
    .config("spark.driver.bindAddress", "127.0.0.1")
    .getOrCreate()
)

