Cannot run pyspark after installation

I manually copied spark-2.4.0-bin-hadoop2.7.tgz and extracted it. Then I added the following entry to .bash_profile:

 export SPARK_HOME=/Users/sumn/Pyspark/spark-2.4.0-bin-hadoop2.7
 export PATH=$SPARK_HOME/bin:$PATH
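For reference, a quick way to confirm these exports are picked up in a new shell (just a sanity check, assuming the paths above) is:

 source ~/.bash_profile
 echo $SPARK_HOME          # should print /Users/sumn/Pyspark/spark-2.4.0-bin-hadoop2.7
 which pyspark             # should resolve to $SPARK_HOME/bin/pyspark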

I am sure I have the JDK installed. Output below:

 ABCDEFGH:bin sumn$ java -version
 java version "11" 2018-09-25
 Java(TM) SE Runtime Environment 18.9 (build 11+28)
 Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11+28, mixed mode)
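(In case it matters: only Java 11 shows up here. On macOS the installed JDKs can be listed with the standard java_home helper; a sketch of how to check whether a Java 8 install is also present:)

 /usr/libexec/java_home -V        # lists every installed JVM with its path
 /usr/libexec/java_home -v 1.8    # prints the home of a Java 8 JDK, if one exists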

Error below:

 ABCDEFGH:bin sumn$ pyspark

 Python 3.7.0 (default, Jun 28 2018, 07:39:16)
 [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 Exception in thread "main" java.lang.ExceptionInInitializerError
     at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
     at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
     at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
     at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
     at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
     at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
     at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
     at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
     at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
     at scala.Option.getOrElse(Option.scala:121)
     at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2422)
     at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
     at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:359)
     at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:359)
     at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
     at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
     at scala.Option.map(Option.scala:146)
     at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:366)
     at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
     at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
     at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
     at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
     at java.base/java.lang.String.substring(String.java:1874)
     at org.apache.hadoop.util.Shell.<clinit>(Shell.java:52)
     ... 23 more
 Traceback (most recent call last):
   File "/Users/sumn/Pyspark/spark-2.4.0-bin-hadoop2.7/python/pyspark/shell.py", line 38, in <module>
     SparkContext._ensure_initialized()
   File "/Users/sumn/Pyspark/spark-2.4.0-bin-hadoop2.7/python/pyspark/context.py", line 298, in _ensure_initialized
     SparkContext._gateway = gateway or launch_gateway(conf)
   File "/Users/sumn/Pyspark/spark-2.4.0-bin-hadoop2.7/python/pyspark/java_gateway.py", line 94, in launch_gateway
     raise Exception("Java gateway process exited before sending its port number")
 Exception: Java gateway process exited before sending its port number
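As far as I can tell, the "Caused by" line points at Hadoop 2.7's Shell.java, which calls substring(0, 3) on the java.version system property; under Java 11 that property is the two-character string "11", which matches the "begin 0, end 3, length 2" message, so Spark 2.4.0 built against Hadoop 2.7 fails to start on Java 11. The property it trips over can be inspected like this (standard JDK flag, shown only as a sketch):

 java -XshowSettings:properties -version 2>&1 | grep 'java\.version'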

I tried everything in the answers here, including the answers from karma and sumn, and nothing worked. I added this path to .bash_profile and it worked for me:

 export JAVA_HOME="/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home -v 1.8" 
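If that plugin path does not exist on your machine, the same idea, pointing JAVA_HOME at a Java 8 install through the standard macOS helper, would look like this (a sketch, assuming a Java 8 JDK is installed):

 export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
 export PATH=$JAVA_HOME/bin:$PATH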

I hope this works for you.