Hello,
I have been trying to build the Spark documentation, following the instructions from the link below:
https://github.com/apache/spark/blob/master/docs/README.md
My Jekyll build fails; the errors are in the gists below:
https://gist.github.com/krishnakalyan3/d0e38852efe97d7899d737b83b8d8702
and
https://gist.github.com/krishnakalyan3/08f00f49a943e43600cbc6b21f307228
Could someone please advise on how to go about resolving this error?
Regards,
Krishna
asked Jan 11 2017 at 10:22 in Spark-User by Krishna Kalyan

1 Answer

Are you using Java 8? Hyukjin fixed up all the errors due to the much
stricter javadoc 8, but it's possible some creep back in because there is
no Java 8 test now.
answered Jan 11 2017 at 10:59 by Sean Owen
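A hypothetical illustration (not taken from the linked gists) of the kind of doc comment javadoc 8's doclint rejects but javadoc 7 tolerated; the object and method names are invented:

```scala
object JavadocExample {
  /** Returns a & b when a < b. */  // doclint in javadoc 8: "error: malformed HTML"
  def bad(a: Int, b: Int): Int = if (a < b) a + b else 0

  /** Returns `a &amp; b` when `a &lt; b`. */  // escaped entities pass doclint
  def good(a: Int, b: Int): Int = if (a < b) a + b else 0
}
```

Running `java -version` before the build and comparing it against the JDK used previously is a quick way to confirm whether this is the cause.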

Related Discussions

  • H2O DataFrame To Spark RDD/DataFrame in Spark-user

  • Hi there, Is there any way to convert an H2O DataFrame to an equivalent Spark RDD or DataFrame? I found good documentation on "*Machine Learning with Sparkling Water: H2O + Spark*" here at. However, it discusses how to convert a Spark RDD or DataFrame to an H2O DataFrame, but not vice versa. Regards, *Md. Rezaul Karim*, BSc, MSc PhD Researcher, INSIGHT Centre for Data Analytics National...
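A minimal round-trip sketch, assuming Sparkling Water's H2OContext and its asH2OFrame/asDataFrame converters (method names and signatures may differ across Sparkling Water versions):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.h2o.H2OContext

val spark = SparkSession.builder.appName("h2o-roundtrip").getOrCreate()
val hc = H2OContext.getOrCreate(spark)   // starts H2O alongside the Spark cluster

val df = spark.range(100).toDF("x")      // a toy Spark DataFrame
val hf = hc.asH2OFrame(df)               // Spark DataFrame -> H2O frame
val back = hc.asDataFrame(hf)            // H2O frame -> Spark DataFrame
back.show(5)
```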

  • What Is "Developer API " In Spark Documentation? in Spark-user

  • Hi Many of spark documentation say "Developer API". What does that mean?...
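A minimal sketch of what the label signifies: Spark tags such APIs with @DeveloperApi from org.apache.spark.annotation (the class below is invented for illustration):

```scala
import org.apache.spark.annotation.DeveloperApi

// "Developer API" marks a lower-level, unstable API: usable by advanced
// users building on Spark internals, but free to change or disappear
// between minor releases, unlike the stable user-facing API.
@DeveloperApi
class CustomShuffleHook
```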

  • How To Use The Spark Submit Script / Capability in Spark-user

  • There is a committed PR from Marcelo Vanzin addressing that capability: https://github.com/apache/spark/pull/3916/files Is there any documentation on how to use this? The PR itself has two comments asking for the docs that were not answered....

  • Spark Logging : Log4j.properties Or Log4j.xml in Spark-user

  • "-Dlog4j.configuration=". Is there any preference to using one over other? All the spark documentation talks about using "log4j.properties" only ( http://spark.apache.org/docs/latest/configuration.html#configuring-logging). So is only "log4j.properties" officially supported?...

  • Is There Documentation On Spark Sql Catalyst? in Spark-user

  • Where can I find good documentation on SQL Catalyst? View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/is-there-documentation-on-spark-sql-catalyst-tp21232.html ...

  • Kafka 0.9 And Spark-streaming-kafka_2.10 in Spark-user

  • Hi, I'm thinking of upgrading our Kafka cluster to 0.9. Will this be a problem for the Spark Streaming + Kafka Direct Approach Integration using artifact spark-streaming-kafka_2.10 (1.6.1)? groupId = org.apache.spark artifactId = spark-streaming-kafka_2.10 version = 1.6.1 Because the documentation states: Kafka: Spark Streaming 1.6.1 is compatible with Kafka 0.8.2.1. Thanks....
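For reference, a sketch of the direct approach as it looks with spark-streaming-kafka_2.10 1.6.x (the Kafka 0.8 consumer API); broker and topic names are invented:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("kafka-direct")
val ssc = new StreamingContext(conf, Seconds(10))

val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
val topics = Set("events")

// Direct stream: one RDD partition per Kafka partition, offsets tracked by Spark
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)

stream.map(_._2).count().print()
ssc.start()
ssc.awaitTermination()
```

Whether this 0.8-based client keeps working against 0.9 brokers comes down to Kafka's broker/client compatibility, which the documentation quoted above deliberately does not promise.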

  • Documentation To Start With in Spark-user

  • Hi, Does anyone have specific documentation for integrating Spark with a Hadoop distribution (that does not already have Spark)? Thanks, Abhilash...

  • SPARK_WORKER_PORT (standalone Cluster) in Spark-user

  • Hi spark ! What is the purpose of the randomly assigned SPARK_WORKER_PORT? The documentation says it is used to "join a cluster", but it's not clear to me how a random port could be used to communicate with other members of a spark pool. This question might be grounded in my ignorance ... if so please just point me to the right documentation if I'm missing something obvious :) thanks ! ...

  • MLlib Documentation Update Needed in Spark-user

  • The loss function here for logistic regression is confusing. It seems to imply that Spark uses one label convention, while the note quoted below (under Classification) says otherwise. We need to make this point more visible to avoid confusion. Better yet, we should replace the loss function listed with the one for {0, 1} labels, no matter how mathematically inconvenient, since that is what is actually implemented in Spark. More problematic...
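For context, the usual point of confusion written out under both label conventions (a reconstruction of the standard formulas, not the guide's exact text):

```latex
% Logistic loss as the MLlib guide writes it, with labels y \in \{-1, +1\}:
L(w; x, y) = \log\bigl(1 + \exp(-y\, w^{\top} x)\bigr)

% Equivalent form for labels t \in \{0, 1\} (the convention the
% implementation consumes), obtained by substituting y = 2t - 1:
L(w; x, t) = -t \log \sigma(w^{\top} x)
             - (1 - t) \log\bigl(1 - \sigma(w^{\top} x)\bigr),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}
```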

  • Still Struggling With Building Documentation in Spark-user

  • I finally came to realize that there is a special maven target to build the scaladocs, although arguably a very unintuitive one: mvn verify. So now I have scaladocs for each package, but not for the whole spark project. Specifically, build/docs/api/scala/index.html is missing. Indeed the whole build/docs/api directory referenced in api.html is missing. How do I build it? Alex Baretta...

  • [Streaming] Non-blocking Recommendation In Custom Receiver Documentation And KinesisReceiver's Worker.run Blocking Call in Spark-user

  • Hi all Reading through Spark Streaming's custom receiver documentation, it is recommended that onStart and onStop methods should not block indefinitely. However, looking at the source code of KinesisReceiver, the onStart method calls worker.run, which blocks until the worker is shut down (via a call to So, my question is: what are the ramifications of making a blocking call in KinesisReceiver...
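A minimal sketch of the non-blocking pattern the custom-receiver guide recommends, with an invented polling source; onStart spawns a thread and returns, in contrast to the blocking worker.run call discussed above:

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class PollingReceiver(url: String)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  override def onStart(): Unit = {
    // Launch the receiving loop on its own thread so onStart returns quickly.
    new Thread("polling-receiver") {
      override def run(): Unit = {
        while (!isStopped()) {
          store(s"record from $url")   // placeholder payload pushed into Spark
          Thread.sleep(1000)
        }
      }
    }.start()
  }

  override def onStop(): Unit = ()     // the loop exits once isStopped() is true
}
```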

  • How To Report Documentation Bug? in Spark-user

  • http://spark.apache.org/docs/latest/quick-start.html#standalone-applications Click on the Java tab. There is a bug in the Maven section: 1.1.0-SNAPSHOT should be 1.1.0. Hope this helps, Andy...

  • [Structured Streaming] Using File Sink To Store To Hive Table. in Spark-user

  • Hi, I'm thinking of using Structured Streaming instead of the old streaming, but I need to be able to save results to a Hive table. The documentation for the file sink says (http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks): "Supports writes to partitioned tables." But being able to write to partitioned directories is not enough to write to the table: someone needs...
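A hedged sketch of the two-step workaround this implies (paths, column and table names invented): let the file sink write partitioned Parquet under the table's location, then register new partitions with the metastore separately:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.current_date

val spark = SparkSession.builder.appName("file-sink-to-hive").getOrCreate()

val events = spark.readStream.format("rate").load()  // toy streaming source
  .withColumn("date", current_date())

val query = events.writeStream
  .format("parquet")
  .option("path", "/warehouse/events")               // hypothetical table location
  .option("checkpointLocation", "/tmp/events-ckpt")
  .partitionBy("date")
  .start()

// The sink only writes directories; the Hive metastore must be told about
// new partitions out of band, for example:
spark.sql("MSCK REPAIR TABLE events")                // or ALTER TABLE ... ADD PARTITION
```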

  • Yarn Documentation in Spark-user

  • "We do not request container resources based on the number of cores. Thus the number of cores given via command line arguments cannot be guaranteed." Can someone explain this a bit more? Is it simply a reflection of the fact that YARN could come back with the requested number of containers, but those could have fewer cores than requested? (And if so, Spark will not ask for more containers...

  • Outdated Documentation? SparkSession in Spark-user

  • Hi! In this doc http://spark.apache.org/docs/latest/programming-guide.html#initializing-spark initialization is described via SparkContext. Do you think it is reasonable to change it to SparkSession, or just mention it at the end? I can prepare it and make a PR for this, but want to know your opinion first. The same for the quickstart: http://spark.apache.org/docs/latest/quick-start.html#self-...
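For comparison, the SparkSession form an updated guide would show (Spark 2.0+), which subsumes SparkContext as the entry point:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("example")
  .master("local[*]")          // for local testing only; omit under spark-submit
  .getOrCreate()

val sc = spark.sparkContext    // the old SparkContext remains reachable
```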

  • Best Practices Repartition Key in Spark-user

  • Hi, I'm looking for documentation or best practices about choosing a key or keys for repartitioning a DataFrame or RDD. Thank you MBAREK nihed M'BAREK Med Nihed, Fedora Ambassador, TUNISIA, Northern Africa http://www.nihed.com...
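A minimal sketch of the two repartition signatures involved ("userId" is an invented key column):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder.appName("repartition").getOrCreate()
val df = spark.range(1000).withColumn("userId", col("id") % 10)

val byKey = df.repartition(col("userId"))        // hash-partition on the key
val sized = df.repartition(200, col("userId"))   // same, with an explicit partition count
```

A good key is one that downstream joins or aggregations group on, and whose values spread evenly enough to avoid skewed partitions.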

  • High-Level Implementation Documentation in Spark-user

  • Hey all, Other than reading the source (not a bad idea in and of itself; something I will get to soon) I was hoping to find some high-level implementation documentation. Can anyone point me to such a document(s)? Thank you in advance. -Kenny...

  • Support For Time Column Type? in Spark-user

  • Hi, I don't see any mention of a time type in the documentation (there is DateType and TimestampType, but not TimeType), and have been unable to find any documentation about whether this will be supported in the future. Does anyone know if this is currently supported or will be supported in the future?...
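Since no TimeType exists, a common workaround (sketched here with invented column names) is to encode time-of-day as an integer such as seconds since midnight:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, hour, minute, second}

val spark = SparkSession.builder.appName("time-of-day").getOrCreate()
import spark.implicits._

val df = Seq("2017-01-11 10:22:31").toDF("raw")
  .withColumn("ts", col("raw").cast("timestamp"))
  .withColumn("secondsOfDay",
    hour(col("ts")) * 3600 + minute(col("ts")) * 60 + second(col("ts")))
```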

  • Random Forest Implementation Details in Spark-user

  • Hi! I'm playing with the random forest implementation in Apache Spark. First impression is - it is not fast :-( Does somebody know how random forest is parallelized in Spark? I mean both fitting and predicting. And also, what do these parameters mean? I didn't find documentation for them: maxMemoryInMB, cacheNodeIds, checkpointInterval. Sergey...
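The three names are garbled in the excerpt but match parameters exposed on spark.ml's RandomForestClassifier; a sketch with illustrative values:

```scala
import org.apache.spark.ml.classification.RandomForestClassifier

val rf = new RandomForestClassifier()
  .setMaxMemoryInMB(256)      // memory budget for aggregating split statistics
  .setCacheNodeIds(true)      // cache each instance's node ID to speed up deep trees
  .setCheckpointInterval(10)  // checkpoint the cached node IDs every 10 iterations
```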

  • Is JavaSparkContext.wholeTextFiles Distributed? in Spark-user

  • Hi guys, I'm trying to read many files from S3 using JavaSparkContext.wholeTextFiles(...). Is that executed in a distributed manner? Please give me a link to the place in the documentation where it's specified. Thanks, Vadim....
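A Scala-equivalent sketch (bucket path invented): wholeTextFiles returns an ordinary RDD of (path, content) pairs, so the read is distributed across tasks, though each individual file is read whole by a single task:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("whole-text-files").getOrCreate()

val files = spark.sparkContext.wholeTextFiles("s3a://my-bucket/input/")
println(files.getNumPartitions)                // > 1 on a non-trivial dataset
files.mapValues(_.length).take(5).foreach(println)
```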