Spark Notes: Learning Configuration from the Official Docs (Part 1)
Published: 2019-06-12


Reference: http://spark.apache.org/docs/latest/configuration.html

Spark provides three locations to configure the system:

  • Spark properties control most application parameters and can be set using a SparkConf object, or through Java system properties.
  • Environment variables can be used to set per-machine settings, such as the IP address, through the conf/spark-env.sh script on each node.
  • Logging can be configured through log4j.properties.

Spark properties control most application settings and are configured separately for each application. They can be set directly on a SparkConf passed to your SparkContext. SparkConf allows you to configure some of the common properties (e.g. the master URL and application name), as well as arbitrary key-value pairs through the set() method. For example, we can initialize an application with two threads as follows:

Note that we run with local[2], meaning two threads, which represents "minimal" parallelism and can help detect bugs that only exist when we run in a distributed context.

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("CountingSheep")
val sc = new SparkContext(conf)
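
Beyond setMaster and setAppName, arbitrary key-value pairs go through the set() method mentioned above. A minimal sketch; the two properties and their values are only illustrative, not recommendations:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("CountingSheep")
  .set("spark.executor.memory", "2g")      // any Spark property can be set as a key-value pair
  .set("spark.eventLog.enabled", "true")   // values are always passed as strings
val sc = new SparkContext(conf)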

Dynamically Loading Spark Properties

In some cases, you may want to avoid hard-coding certain configurations in a SparkConf. For instance, if you would like to run the same application with different masters or different amounts of memory, Spark allows you to simply create an empty conf:

val sc = new SparkContext(new SparkConf())

Then, you can supply configuration values at runtime:

./bin/spark-submit --name "My app" --master local[4] \
  --conf spark.eventLog.enabled=false \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  myApp.jar

The Spark shell and the spark-submit tool support two ways to load configurations dynamically. The first is command line options, such as --master, as shown above. spark-submit can accept any Spark property using the --conf flag, but uses special flags for properties that play a part in launching the Spark application. Running ./bin/spark-submit --help will show the entire list of these options.

bin/spark-submit will also read configuration options from conf/spark-defaults.conf, in which each line consists of a key and a value separated by whitespace. For example:

spark.master            spark://5.6.7.8:7077
spark.executor.memory   4g
spark.eventLog.enabled  true
spark.serializer        org.apache.spark.serializer.KryoSerializer

Any values specified as flags or in the properties file will be passed on to the application and merged with those specified through SparkConf. Properties set directly on the SparkConf take highest precedence, then flags passed to spark-submit or spark-shell, then options in the spark-defaults.conf file. A few configuration keys have been renamed since earlier versions of Spark; in such cases, the older key names are still accepted, but take lower precedence than any instance of the newer key.

Viewing Spark Properties

The application web UI at http://IP:4040 lists Spark properties in the "Environment" tab. This is a useful place to check to make sure that your properties have been set correctly. Note that only values explicitly specified through spark-defaults.conf, SparkConf, or the command line will appear there. For all other configuration properties, you can assume the default value is used.
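
Besides the "Environment" tab, explicitly set properties can also be inspected from the driver program itself. A small sketch using the SparkConf accessors on an existing SparkContext (sc); the fallback value shown is only illustrative:

// Print every explicitly set property, roughly what the Environment tab shows
sc.getConf.getAll.sorted.foreach { case (key, value) => println(s"$key=$value") }

// Look up a single property, with a default if it was never set explicitly
val serializer = sc.getConf.get("spark.serializer", "org.apache.spark.serializer.JavaSerializer")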

Available Properties

Most of the properties that control internal settings have reasonable default values. Some of the most common options to set are:

Application Properties

Property Name Default Meaning
spark.app.name (none) The name of your application. This will appear in the UI and in log data.
spark.driver.cores 1 Number of cores to use for the driver process, only in cluster mode.
spark.driver.maxResultSize 1g Limit of total size of serialized results of all partitions for each Spark action (e.g. collect). Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total size is above this limit. Having a high limit may cause out-of-memory errors in the driver (depends on spark.driver.memory and the memory overhead of objects in the JVM). Setting a proper limit can protect the driver from out-of-memory errors.
spark.driver.memory 1g Amount of memory to use for the driver process, i.e. where SparkContext is initialized (e.g. 1g, 2g).
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-memory command line option or in your default properties file.
spark.executor.memory 1g Amount of memory to use per executor process (e.g. 2g, 8g).
spark.extraListeners (none) A comma-separated list of classes that implement SparkListener; when initializing SparkContext, instances of these classes will be created and registered with Spark's listener bus. If a class has a single-argument constructor that accepts a SparkConf, that constructor will be called; otherwise, a zero-argument constructor will be called. If no valid constructor can be found, the SparkContext creation will fail with an exception.
spark.local.dir /tmp Directory to use for "scratch" space in Spark, including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks. Note: In Spark 1.0 and later this will be overridden by the SPARK_LOCAL_DIRS (Standalone, Mesos) or LOCAL_DIRS (YARN) environment variables set by the cluster manager.
spark.logConf false Logs the effective SparkConf as INFO when a SparkContext is started.
spark.master (none) The cluster manager to connect to. See the list of allowed master URLs.
spark.submit.deployMode (none) The deploy mode of the Spark driver program, either "client" or "cluster", which means to launch the driver program locally ("client") or remotely ("cluster") on one of the nodes inside the cluster.
spark.log.callerContext (none) Application information that will be written into the Yarn RM log / HDFS audit log when running on Yarn/HDFS. Its length depends on the Hadoop configuration hadoop.caller.context.max.size. It should be concise, and typically can have up to 50 characters.
spark.driver.supervise false If true, restarts the driver automatically if it fails with a non-zero exit status. Only has effect in Spark standalone mode or Mesos cluster deploy mode.
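
For instance, a job that collects large results back to the driver might raise spark.driver.maxResultSize while keeping it bounded. A sketch using properties from the table above; the values are illustrative:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("ResultHeavyJob")
  .set("spark.driver.maxResultSize", "2g")   // abort the action instead of letting the driver run out of memory
  .set("spark.logConf", "true")              // log the effective SparkConf at INFO when the context starts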

Apart from these, the following properties are also available and may be useful in some situations:

Runtime Environment

Property Name Default Meaning
spark.driver.extraClassPath (none) Extra classpath entries to prepend to the classpath of the driver.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-class-path command line option or in your default properties file.
spark.driver.extraJavaOptions (none) A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap size settings can be set with spark.driver.memory in cluster mode and through the --driver-memory command line option in client mode.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-java-options command line option or in your default properties file.
spark.driver.extraLibraryPath (none) Set a special library path to use when launching the driver JVM.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-library-path command line option or in your default properties file.
spark.driver.userClassPathFirst false (Experimental) Whether to give user-added jars precedence over Spark's own jars when loading classes in the driver. This feature can be used to mitigate conflicts between Spark's dependencies and user dependencies. It is currently an experimental feature and is used in cluster mode only.
spark.executor.extraClassPath (none) Extra classpath entries to prepend to the classpath of executors. This exists primarily for backwards-compatibility with older versions of Spark. Users typically should not need to set this option.
spark.executor.extraJavaOptions (none) A string of extra JVM options to pass to executors. For instance, GC settings or other logging. Note that it is illegal to set Spark properties or maximum heap size (-Xmx) settings with this option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file used with the spark-submit script. Maximum heap size settings can be set with spark.executor.memory.
spark.executor.extraLibraryPath (none) Set a special library path to use when launching executor JVMs.
spark.executor.logs.rolling.maxRetainedFiles (none) Sets the number of latest rolling log files that are going to be retained by the system. Older log files will be deleted. Disabled by default.
spark.executor.logs.rolling.enableCompression false Enable executor log compression. If it is enabled, the rolled executor logs will be compressed. Disabled by default.
spark.executor.logs.rolling.maxSize (none) Set the max size of the file in bytes by which the executor logs will be rolled over. Rolling is disabled by default. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs.
spark.executor.logs.rolling.strategy (none) Set the strategy of rolling of executor logs. By default it is disabled. It can be set to "time" (time-based rolling) or "size" (size-based rolling). For "time", use spark.executor.logs.rolling.time.interval to set the rolling interval. For "size", use spark.executor.logs.rolling.maxSize to set the maximum file size for rolling.
spark.executor.logs.rolling.time.interval daily Set the time interval by which the executor logs will be rolled over. Rolling is disabled by default. Valid values are daily, hourly, minutely, or any interval in seconds. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs.
spark.executor.userClassPathFirst false (Experimental) Same functionality as spark.driver.userClassPathFirst, but applied to executor instances.
spark.executorEnv.[EnvironmentVariableName] (none) Add the environment variable specified by EnvironmentVariableName to the executor process. The user can specify multiple of these to set multiple environment variables.
spark.redaction.regex (?i)secret|password Regex to decide which Spark configuration properties and environment variables in the driver and executor environments contain sensitive information. When this regex matches a property key or value, the value is redacted from the environment UI and various logs such as YARN and event logs.
spark.python.profile false Enable profiling in Python workers; the profile result will be shown by sc.show_profiles(), or it will be displayed before the driver exits. It can also be dumped to disk by sc.dump_profiles(path). If some of the profile results have been displayed manually, they will not be displayed automatically before the driver exits. By default pyspark.profiler.BasicProfiler will be used, but this can be overridden by passing a profiler class in as a parameter to the SparkContext constructor.
spark.python.profile.dump (none) The directory used to dump the profile results before the driver exits. The results will be dumped as a separate file for each RDD. They can be loaded by ptats.Stats(). If this is specified, the profile results will not be displayed automatically.
spark.python.worker.memory 512m Amount of memory to use per Python worker process during aggregation, in the same format as JVM memory strings (e.g. 512m, 2g). If the memory used during aggregation goes above this amount, it will spill the data to disk.
spark.python.worker.reuse true Whether to reuse Python workers. If yes, a fixed number of Python workers is used and a Python process does not need to be fork()ed for every task. This is very useful if there is a large broadcast, since the broadcast will not need to be transferred from the JVM to a Python worker for every task.
spark.files (none) Comma-separated list of files to be placed in the working directory of each executor.
spark.submit.pyFiles (none) Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
spark.jars (none) Comma-separated list of local jars to include on the driver and executor classpaths.
spark.jars.packages (none) Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. The coordinates should be groupId:artifactId:version. If spark.jars.ivySettings is given, artifacts will be resolved according to the configuration in that file; otherwise artifacts will be searched for in the local Maven repository, then Maven Central, and finally any additional remote repositories given by the command-line option --repositories.
spark.jars.excludes (none) Comma-separated list of groupId:artifactId entries to exclude while resolving the dependencies provided in spark.jars.packages, to avoid dependency conflicts.
spark.jars.ivy (none) Path to specify the Ivy user directory, used for the local Ivy cache and package files from spark.jars.packages. This will override the Ivy property ivy.default.ivy.user.dir, which defaults to ~/.ivy2.
spark.jars.ivySettings (none) Path to an Ivy settings file to customize resolution of jars specified using spark.jars.packages instead of the built-in defaults, such as Maven Central. Additional repositories given by the command-line option --repositories will also be included. Useful for allowing Spark to resolve artifacts from behind a firewall, e.g. via an in-house artifact server like Artifactory. Details on the settings file format can be found at http://ant.apache.org/ivy/history/latest-milestone/settings.html
spark.pyspark.driver.python (none) Python binary executable to use for PySpark in the driver. (default is spark.pyspark.python)
spark.pyspark.python (none) Python binary executable to use for PySpark in both the driver and executors.
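
A sketch combining a few runtime-environment properties from this table; the GC flag, environment variable, and Maven coordinate are illustrative only, and the driver-side extra* options cannot be set this way in client mode, as noted above:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails")           // extra JVM flags, but never -Xmx here
  .setExecutorEnv("DATA_DIR", "/mnt/data")                                 // becomes spark.executorEnv.DATA_DIR
  .set("spark.jars.packages", "org.apache.commons:commons-math3:3.6.1")    // resolved as groupId:artifactId:version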

Shuffle Behavior

Property Name Default Meaning
spark.reducer.maxSizeInFlight 48m Maximum size of map outputs to fetch simultaneously from each reduce task. Since each output requires us to create a buffer to receive it, this represents a fixed memory overhead per reduce task, so keep it small unless you have a large amount of memory.
spark.reducer.maxReqsInFlight Int.MaxValue This configuration limits the number of remote requests to fetch blocks at any given point. When the number of hosts in the cluster increase, it might lead to very large number of in-bound connections to one or more nodes, causing the workers to fail under load. By allowing it to limit the number of fetch requests, this scenario can be mitigated.
spark.shuffle.compress true Whether to compress map output files. Generally a good idea. Compression will use spark.io.compression.codec.
spark.shuffle.file.buffer 32k Size of the in-memory buffer for each shuffle file output stream. These buffers reduce the number of disk seeks and system calls made in creating intermediate shuffle files.
spark.shuffle.io.maxRetries 3 (Netty only) Fetches that fail due to IO-related exceptions are automatically retried if this is set to a non-zero value. This retry logic helps stabilize large shuffles in the face of long GC pauses or transient network connectivity issues.
spark.shuffle.io.numConnectionsPerPeer 1 (Netty only) Connections between hosts are reused in order to reduce connection buildup for large clusters. For clusters with many hard disks and few hosts, this may result in insufficient concurrency to saturate all disks, and so users may consider increasing this value.
spark.shuffle.io.preferDirectBufs true (Netty only) Off-heap buffers are used to reduce garbage collection during shuffle and cache block transfer. For environments where off-heap memory is tightly limited, users may wish to turn this off to force all allocations from Netty to be on-heap.
spark.shuffle.io.retryWait 5s (Netty only) How long to wait between retries of fetches. The maximum delay caused by retrying is 15 seconds by default, calculated as maxRetries * retryWait.
spark.shuffle.service.enabled false Enables the external shuffle service. This service preserves the shuffle files written by executors so the executors can be safely removed. This must be enabled if spark.dynamicAllocation.enabled is "true". The external shuffle service must be set up in order to enable it. See  for more information.
spark.shuffle.service.port 7337 Port on which the external shuffle service will run.
spark.shuffle.service.index.cache.entries 1024 Max number of entries to keep in the index cache of the shuffle service.
spark.shuffle.sort.bypassMergeThreshold 200 (Advanced) In the sort-based shuffle manager, avoid merge-sorting data if there is no map-side aggregation and there are at most this many reduce partitions.
spark.shuffle.spill.compress true Whether to compress data spilled during shuffles. Compression will use spark.io.compression.codec.
spark.shuffle.accurateBlockThreshold 100 * 1024 * 1024 When we compress the size of shuffle blocks in HighlyCompressedMapStatus, we will record the size accurately if it's above this config. This helps to prevent OOM by avoiding underestimating shuffle block size when fetch shuffle blocks.
spark.io.encryption.enabled false Enable IO encryption. Currently supported by all modes except Mesos. It's recommended that RPC encryption be enabled when using this feature.
spark.io.encryption.keySizeBits 128 IO encryption key size in bits. Supported values are 128, 192 and 256.
spark.io.encryption.keygen.algorithm HmacSHA1 The algorithm to use when generating the IO encryption key. The supported algorithms are described in the KeyGenerator section of the Java Cryptography Architecture Standard Algorithm Name Documentation.
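
A sketch of a few shuffle settings from this table for a shuffle-heavy job; the sizes are illustrative starting points rather than recommendations:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.reducer.maxSizeInFlight", "96m")   // larger fetch buffers, at the cost of more memory per reduce task
  .set("spark.shuffle.file.buffer", "64k")       // fewer disk seeks when writing intermediate shuffle files
  .set("spark.shuffle.compress", "true")         // compress map outputs using spark.io.compression.codec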

Spark UI

Property Name Default Meaning
spark.eventLog.compress false Whether to compress logged events, if spark.eventLog.enabled is true. Compression will use spark.io.compression.codec.
spark.eventLog.dir file:///tmp/spark-events Base directory in which Spark events are logged, if spark.eventLog.enabled is true. Within this base directory, Spark creates a sub-directory for each application, and logs the events specific to the application in this directory. Users may want to set this to a unified location like an HDFS directory so history files can be read by the history server.
spark.eventLog.enabled false Whether to log Spark events, useful for reconstructing the Web UI after the application has finished.
spark.ui.enabled true Whether to run the web UI for the Spark application.
spark.ui.killEnabled true Allows jobs and stages to be killed from the web UI.
spark.ui.port 4040 Port for your application's dashboard, which shows memory and workload data.
spark.ui.retainedJobs 1000 How many jobs the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
spark.ui.retainedStages 1000 How many stages the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
spark.ui.retainedTasks 100000 How many tasks the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
spark.ui.reverseProxy false Enable running Spark Master as reverse proxy for worker and application UIs. In this mode, Spark master will reverse proxy the worker and application UIs to enable access without requiring direct access to their hosts. Use it with caution, as worker and application UI will not be accessible directly, you will only be able to access them through spark master/proxy public URL. This setting affects all the workers and application UIs running in the cluster and must be set on all the workers, drivers and masters.
spark.ui.reverseProxyUrl   This is the URL where your proxy is running. This URL is for proxy which is running in front of Spark Master. This is useful when running proxy for authentication e.g. OAuth proxy. Make sure this is a complete URL including scheme (http/https) and port to reach your proxy.
spark.ui.showConsoleProgress true Show the progress bar in the console. The progress bar shows the progress of stages that run for longer than 500ms. If multiple stages run at the same time, multiple progress bars will be displayed on the same line.
spark.worker.ui.retainedExecutors 1000 How many finished executors the Spark UI and status APIs remember before garbage collecting.
spark.worker.ui.retainedDrivers 1000 How many finished drivers the Spark UI and status APIs remember before garbage collecting.
spark.sql.ui.retainedExecutions 1000 How many finished executions the Spark UI and status APIs remember before garbage collecting.
spark.streaming.ui.retainedBatches 1000 How many finished batches the Spark UI and status APIs remember before garbage collecting.
spark.ui.retainedDeadExecutors 100 How many dead executors the Spark UI and status APIs remember before garbage collecting.
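
Event logging is what allows the web UI to be reconstructed after an application finishes. A sketch enabling it; the HDFS path is a placeholder for whatever shared location your history server reads:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs://namenode:8020/spark-events")   // placeholder; use a location the history server can read
  .set("spark.eventLog.compress", "true")                           // compressed with spark.io.compression.codec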

Compression and Serialization

Property Name Default Meaning
spark.broadcast.compress true Whether to compress broadcast variables before sending them. Generally a good idea. Compression will use spark.io.compression.codec.
spark.io.compression.codec lz4 The codec used to compress internal data such as RDD partitions, event log, broadcast variables and shuffle outputs. By default, Spark provides three codecs: lz4, lzf, and snappy. You can also use fully qualified class names to specify the codec, e.g. org.apache.spark.io.LZ4CompressionCodec, org.apache.spark.io.LZFCompressionCodec, and org.apache.spark.io.SnappyCompressionCodec.
spark.io.compression.lz4.blockSize 32k Block size used in LZ4 compression, in the case when LZ4 compression codec is used. Lowering this block size will also lower shuffle memory usage when LZ4 is used.
spark.io.compression.snappy.blockSize 32k Block size used in Snappy compression, in the case when Snappy compression codec is used. Lowering this block size will also lower shuffle memory usage when Snappy is used.
spark.kryo.classesToRegister (none) If you use Kryo serialization, give a comma-separated list of custom class names to register with Kryo. See the  for more details.
spark.kryo.referenceTracking true Whether to track references to the same object when serializing data with Kryo, which is necessary if your object graphs have loops and useful for efficiency if they contain multiple copies of the same object. Can be disabled to improve performance if you know this is not the case.
spark.kryo.registrationRequired false Whether to require registration with Kryo. If set to 'true', Kryo will throw an exception if an unregistered class is serialized. If set to false (the default), Kryo will write unregistered class names along with each object. Writing class names can cause significant performance overhead, so enabling this option can enforce strictly that a user has not omitted classes from registration.
spark.kryo.registrator (none) If you use Kryo serialization, give a comma-separated list of classes that register your custom classes with Kryo. This property is useful if you need to register your classes in a custom way, e.g. to specify a custom field serializer. Otherwise spark.kryo.classesToRegister is simpler. It should be set to classes that extend KryoRegistrator. See the tuning guide for more details.
spark.kryo.unsafe false Whether to use unsafe based Kryo serializer. Can be substantially faster by using Unsafe Based IO.
spark.kryoserializer.buffer.max 64m Maximum allowable size of Kryo serialization buffer. This must be larger than any object you attempt to serialize and must be less than 2048m. Increase this if you get a "buffer limit exceeded" exception inside Kryo.
spark.kryoserializer.buffer 64k Initial size of Kryo's serialization buffer. Note that there will be one buffer per core on each worker. This buffer will grow up to spark.kryoserializer.buffer.max if needed.
spark.rdd.compress false Whether to compress serialized RDD partitions (e.g. for StorageLevel.MEMORY_ONLY_SER in Java and Scala or StorageLevel.MEMORY_ONLY in Python). Can save substantial space at the cost of some extra CPU time. Compression will use spark.io.compression.codec.
spark.serializer org.apache.spark.serializer.JavaSerializer Class to use for serializing objects that will be sent over the network or need to be cached in serialized form. The default of Java serialization works with any Serializable Java object but is quite slow, so we recommend using Kryo serialization when speed is necessary. Can be any subclass of org.apache.spark.Serializer.
spark.serializer.objectStreamReset 100 When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches objects to prevent writing redundant data, however that stops garbage collection of those objects. By calling 'reset' you flush that info from the serializer, and allow old objects to be collected. To turn off this periodic reset set it to -1. By default it will reset the serializer every 100 objects.
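
A common change from this table is switching the serializer to Kryo and registering application classes so class names are not written with every object. A sketch, where MyRecord is a hypothetical application class:

import org.apache.spark.SparkConf

case class MyRecord(id: Long, name: String)   // hypothetical class used only for illustration

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.max", "128m")     // raise if Kryo reports "buffer limit exceeded"
  .registerKryoClasses(Array(classOf[MyRecord]))      // equivalent to setting spark.kryo.classesToRegister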

Memory Management

Property Name Default Meaning
spark.memory.fraction 0.6 Fraction of (heap space - 300MB) used for execution and storage. The lower this is, the more frequently spills and cached data eviction occur. The purpose of this config is to set aside memory for internal metadata, user data structures, and imprecise size estimation in the case of sparse, unusually large records. Leaving this at the default value is recommended. For more detail, including important information about correctly tuning JVM garbage collection when increasing this value, see .
spark.memory.storageFraction 0.5 Amount of storage memory immune to eviction, expressed as a fraction of the size of the region set aside by spark.memory.fraction. The higher this is, the less working memory may be available to execution and tasks may spill to disk more often. Leaving this at the default value is recommended. For more detail, see .
spark.memory.offHeap.enabled false If true, Spark will attempt to use off-heap memory for certain operations. If off-heap memory use is enabled, then spark.memory.offHeap.size must be positive.
spark.memory.offHeap.size 0 The absolute amount of memory in bytes which can be used for off-heap allocation. This setting has no impact on heap memory usage, so if your executors' total memory consumption must fit within some hard limit then be sure to shrink your JVM heap size accordingly. This must be set to a positive value when spark.memory.offHeap.enabled=true.
spark.memory.useLegacyMode false Whether to enable the legacy memory management mode used in Spark 1.5 and before. The legacy mode rigidly partitions the heap space into fixed-size regions, potentially leading to excessive spilling if the application was not tuned. The following deprecated memory fraction configurations are not read unless this is enabled: spark.shuffle.memoryFraction, spark.storage.memoryFraction, spark.storage.unrollFraction.
spark.shuffle.memoryFraction 0.2 (deprecated) This is read only if spark.memory.useLegacyMode is enabled. Fraction of Java heap to use for aggregation and cogroups during shuffles. At any given time, the collective size of all in-memory maps used for shuffles is bounded by this limit, beyond which the contents will begin to spill to disk. If spills are often, consider increasing this value at the expense of spark.storage.memoryFraction.
spark.storage.memoryFraction 0.6 (deprecated) This is read only if spark.memory.useLegacyMode is enabled. Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old" generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase it if you configure your own old generation size.
spark.storage.unrollFraction 0.2 (deprecated) This is read only if spark.memory.useLegacyMode is enabled. Fraction of spark.storage.memoryFraction to use for unrolling blocks in memory. This is dynamically allocated by dropping existing blocks when there is not enough free storage space to unroll the new block in its entirety.
spark.storage.replication.proactive false Enables proactive block replication for RDD blocks. Cached RDD block replicas lost due to executor failures are replenished if there are any existing available replicas. This tries to get the replication level of the block to the initial number.
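
As the table notes, spark.memory.offHeap.size must be positive whenever off-heap use is enabled, so the two are set together. A sketch; the size and fraction below are illustrative:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "2147483648")   // 2 GB in bytes; must be > 0 when off-heap is enabled
  .set("spark.memory.storageFraction", "0.4")       // leave more of the unified region to execution
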
### Execution Behavior

Property Name Default Meaning
spark.broadcast.blockSize 4m Size of each piece of a block for TorrentBroadcastFactory. Too large a value decreases parallelism during broadcast (makes it slower); however, if it is too small, BlockManager might take a performance hit.
spark.executor.cores 1 in YARN mode, all the available cores on the worker in standalone and Mesos coarse-grained modes. The number of cores to use on each executor. In standalone and Mesos coarse-grained modes, setting this parameter allows an application to run multiple executors on the same worker, provided that there are enough cores on that worker. Otherwise, only one executor per application will run on each worker.
spark.default.parallelism For distributed shuffle operations like reduceByKey and join, the largest number of partitions in a parent RDD. For operations like parallelize with no parent RDDs, it depends on the cluster manager:
  • Local mode: number of cores on the local machine
  • Mesos fine grained mode: 8
  • Others: total number of cores on all executor nodes or 2, whichever is larger
Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by user.
spark.executor.heartbeatInterval 10s Interval between each executor's heartbeats to the driver. Heartbeats let the driver know that the executor is still alive and update it with metrics for in-progress tasks. spark.executor.heartbeatInterval should be significantly less than spark.network.timeout
spark.files.fetchTimeout 60s Communication timeout to use when fetching files added through SparkContext.addFile() from the driver.
spark.files.useFetchCache true If set to true (default), file fetching will use a local cache that is shared by executors that belong to the same application, which can improve task launching performance when running many executors on the same host. If set to false, these caching optimizations will be disabled and all executors will fetch their own copies of files. This optimization may be disabled in order to use Spark local directories that reside on NFS filesystems (see  for more details).
spark.files.overwrite false Whether to overwrite files added through SparkContext.addFile() when the target file exists and its contents do not match those of the source.
spark.files.maxPartitionBytes 134217728 (128 MB) The maximum number of bytes to pack into a single partition when reading files.
spark.files.openCostInBytes 4194304 (4 MB) The estimated cost to open a file, measured by the number of bytes that could be scanned in the same time. This is used when putting multiple files into a partition. It is better to over-estimate; then the partitions with small files will be faster than partitions with bigger files.
spark.hadoop.cloneConf false If set to true, clones a new Hadoop Configurationobject for each task. This option should be enabled to work around Configuration thread-safety issues (see  for more details). This is disabled by default in order to avoid unexpected performance regressions for jobs that are not affected by these issues.
spark.hadoop.validateOutputSpecs true If set to true, validates the output specification (e.g. checking if the output directory already exists) used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing output directories. We recommend that users do not disable this except if trying to achieve compatibility with previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand. This setting is ignored for jobs generated through Spark Streaming's StreamingContext, since data may need to be rewritten to pre-existing output directories during checkpoint recovery.
spark.storage.memoryMapThreshold 2m Size of a block above which Spark memory maps when reading a block from disk. This prevents Spark from memory mapping very small blocks. In general, memory mapping has high overhead for blocks close to or below the page size of the operating system.
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 1 The file output committer algorithm version, valid algorithm version number: 1 or 2. Version 2 may have better performance, but version 1 may handle failures better in certain situations, as per .

### Networking

Property Name Default Meaning
spark.rpc.message.maxSize 128 Maximum message size (in MB) to allow in "control plane" communication; generally only applies to map output size information sent between executors and the driver. Increase this if you are running jobs with many thousands of map and reduce tasks and see messages about the RPC message size.
spark.blockManager.port (random) Port for all block managers to listen on. These exist on both the driver and the executors.
spark.driver.blockManager.port (value of spark.blockManager.port) Driver-specific port for the block manager to listen on, for cases where it cannot use the same configuration as executors.
spark.driver.bindAddress (value of spark.driver.host) Hostname or IP address where to bind listening sockets. This config overrides the SPARK_LOCAL_IP environment variable (see below). 
It also allows a different address from the local one to be advertised to executors or external systems. This is useful, for example, when running containers with bridged networking. For this to properly work, the different ports used by the driver (RPC, block manager and UI) need to be forwarded from the container's host.
spark.driver.host (local hostname) Hostname or IP address for the driver. This is used for communicating with the executors and the standalone Master.
spark.driver.port (random) Port for the driver to listen on. This is used for communicating with the executors and the standalone Master.
spark.network.timeout 120s Default timeout for all network interactions. This config will be used in place of spark.core.connection.ack.wait.timeout, spark.storage.blockManagerSlaveTimeoutMs, spark.shuffle.io.connectionTimeout, spark.rpc.askTimeout or spark.rpc.lookupTimeout if they are not configured.
spark.port.maxRetries 16 Maximum number of retries when binding to a port before giving up. When a port is given a specific value (non 0), each subsequent retry will increment the port used in the previous attempt by 1 before retrying. This essentially allows it to try a range of ports from the start port specified to port + maxRetries.
spark.rpc.numRetries 3 Number of times to retry before an RPC task gives up. An RPC task will run at most times of this number.
spark.rpc.retry.wait 3s Duration for an RPC ask operation to wait before retrying.
spark.rpc.askTimeout spark.network.timeout Duration for an RPC ask operation to wait before timing out.
spark.rpc.lookupTimeout 120s Duration for an RPC remote endpoint lookup operation to wait before timing out.

### Scheduling

Property Name Default Meaning
spark.cores.max (not set) When running on a  or a , the maximum amount of CPU cores to request for the application from across the cluster (not from each machine). If not set, the default will be spark.deploy.defaultCores on Spark's standalone cluster manager, or infinite (all available cores) on Mesos.
spark.locality.wait 3s How long to wait to launch a data-local task before giving up and launching it on a less-local node. The same wait will be used to step through multiple locality levels (process-local, node-local, rack-local and then any). It is also possible to customize the waiting time for each level by setting spark.locality.wait.node, etc. You should increase this setting if your tasks are long and see poor locality, but the default usually works well.
spark.locality.wait.node spark.locality.wait Customize the locality wait for node locality. For example, you can set this to 0 to skip node locality and search immediately for rack locality (if your cluster has rack information).
spark.locality.wait.process spark.locality.wait Customize the locality wait for process locality. This affects tasks that attempt to access cached data in a particular executor process.
spark.locality.wait.rack spark.locality.wait Customize the locality wait for rack locality.
spark.scheduler.maxRegisteredResourcesWaitingTime 30s Maximum amount of time to wait for resources to register before scheduling begins.
spark.scheduler.minRegisteredResourcesRatio 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode The minimum ratio of registered resources (registered resources / total expected resources) (resources are executors in yarn mode, CPU cores in standalone mode and Mesos coarse-grained mode ['spark.cores.max' value is total expected resources for Mesos coarse-grained mode]) to wait for before scheduling begins. Specified as a double between 0.0 and 1.0. Regardless of whether the minimum ratio of resources has been reached, the maximum amount of time it will wait before scheduling begins is controlled by the config spark.scheduler.maxRegisteredResourcesWaitingTime.
spark.scheduler.mode FIFO The scheduling mode between jobs submitted to the same SparkContext. Can be set to FAIR to use fair sharing instead of queueing jobs one after another. Useful for multi-user services.
spark.scheduler.revive.interval 1s The interval length for the scheduler to revive the worker resource offers to run tasks.
spark.blacklist.enabled false If set to "true", prevent Spark from scheduling tasks on executors that have been blacklisted due to too many task failures. The blacklisting algorithm can be further controlled by the other "spark.blacklist" configuration options.
spark.blacklist.timeout 1h (Experimental) How long a node or executor is blacklisted for the entire application, before it is unconditionally removed from the blacklist to attempt running new tasks.
spark.blacklist.task.maxTaskAttemptsPerExecutor 1 (Experimental) For a given task, how many times it can be retried on one executor before the executor is blacklisted for that task.
spark.blacklist.task.maxTaskAttemptsPerNode 2 (Experimental) For a given task, how many times it can be retried on one node, before the entire node is blacklisted for that task.
spark.blacklist.stage.maxFailedTasksPerExecutor 2 (Experimental) How many different tasks must fail on one executor, within one stage, before the executor is blacklisted for that stage.
spark.blacklist.stage.maxFailedExecutorsPerNode 2 (Experimental) How many different executors are marked as blacklisted for a given stage, before the entire node is marked as failed for the stage.
spark.blacklist.application.maxFailedTasksPerExecutor 2 (Experimental) How many different tasks must fail on one executor, in successful task sets, before the executor is blacklisted for the entire application. Blacklisted executors will be automatically added back to the pool of available resources after the timeout specified by spark.blacklist.timeout. Note that with dynamic allocation, though, the executors may get marked as idle and be reclaimed by the cluster manager.
spark.blacklist.application.maxFailedExecutorsPerNode 2 (Experimental) How many different executors must be blacklisted for the entire application, before the node is blacklisted for the entire application. Blacklisted nodes will be automatically added back to the pool of available resources after the timeout specified by spark.blacklist.timeout. Note that with dynamic allocation, though, the executors on the node may get marked as idle and be reclaimed by the cluster manager.
spark.blacklist.killBlacklistedExecutors false (Experimental) If set to "true", allow Spark to automatically kill, and attempt to re-create, executors when they are blacklisted. Note that, when an entire node is added to the blacklist, all of the executors on that node will be killed.
spark.speculation false If set to "true", performs speculative execution of tasks. This means if one or more tasks are running slowly in a stage, they will be re-launched.
spark.speculation.interval 100ms How often Spark will check for tasks to speculate.
spark.speculation.multiplier 1.5 How many times slower a task is than the median to be considered for speculation.
spark.speculation.quantile 0.75 Fraction of tasks which must be complete before speculation is enabled for a particular stage.
spark.task.cpus 1 Number of cores to allocate for each task.
spark.task.maxFailures 4 Number of failures of any particular task before giving up on the job. The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
spark.task.reaper.enabled false Enables monitoring of killed / interrupted tasks. When set to true, any task which is killed will be monitored by the executor until that task actually finishes executing. See the other spark.task.reaper.* configurations for details on how to control the exact behavior of this monitoring. When set to false (the default), task killing will use an older code path which lacks such monitoring.
spark.task.reaper.pollingInterval 10s When spark.task.reaper.enabled = true, this setting controls the frequency at which executors will poll the status of killed tasks. If a killed task is still running when polled then a warning will be logged and, by default, a thread-dump of the task will be logged (this thread dump can be disabled via the spark.task.reaper.threadDump setting, which is documented below).
spark.task.reaper.threadDump true When spark.task.reaper.enabled = true, this setting controls whether task thread dumps are logged during periodic polling of killed tasks. Set this to false to disable collection of thread dumps.
spark.task.reaper.killTimeout -1 When spark.task.reaper.enabled = true, this setting specifies a timeout after which the executor JVM will kill itself if a killed task has not stopped running. The default value, -1, disables this mechanism and prevents the executor from self-destructing. The purpose of this setting is to act as a safety-net to prevent runaway uncancellable tasks from rendering an executor unusable.
spark.stage.maxConsecutiveAttempts 4 Number of consecutive stage attempts allowed before a stage is aborted.
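
A sketch of the speculation and locality knobs above for a stage with a few straggling tasks; the thresholds are illustrative:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.speculation", "true")
  .set("spark.speculation.quantile", "0.9")    // wait until 90% of tasks in a stage finish before speculating
  .set("spark.speculation.multiplier", "2")    // only re-launch tasks at least 2x slower than the median
  .set("spark.locality.wait", "1s")            // give up on data-local scheduling sooner than the 3s default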

### Dynamic Allocation

Property Name Default Meaning
spark.dynamicAllocation.enabled false Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. For more detail, see the description . 
This requires spark.shuffle.service.enabled to be set. The following configurations are also relevant: spark.dynamicAllocation.minExecutors, spark.dynamicAllocation.maxExecutors, and spark.dynamicAllocation.initialExecutors.
spark.dynamicAllocation.executorIdleTimeout 60s If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed. For more detail, see this .
spark.dynamicAllocation.cachedExecutorIdleTimeout infinity If dynamic allocation is enabled and an executor which has cached data blocks has been idle for more than this duration, the executor will be removed. For more details, see this .
spark.dynamicAllocation.initialExecutors spark.dynamicAllocation.minExecutors Initial number of executors to run if dynamic allocation is enabled. 
If `--num-executors` (or `spark.executor.instances`) is set and larger than this value, it will be used as the initial number of executors.
spark.dynamicAllocation.maxExecutors infinity Upper bound for the number of executors if dynamic allocation is enabled.
spark.dynamicAllocation.minExecutors 0 Lower bound for the number of executors if dynamic allocation is enabled.
spark.dynamicAllocation.schedulerBacklogTimeout 1s If dynamic allocation is enabled and there have been pending tasks backlogged for more than this duration, new executors will be requested. For more detail, see this .
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout schedulerBacklogTimeout Same as spark.dynamicAllocation.schedulerBacklogTimeout, but used only for subsequent executor requests. For more detail, see this .
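
Dynamic allocation depends on the external shuffle service described in the Shuffle Behavior table above, so the two are usually enabled together. A sketch; the executor counts and timeout are illustrative:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")               // required; the service itself must also be running on the workers
  .set("spark.dynamicAllocation.minExecutors", "2")
  .set("spark.dynamicAllocation.initialExecutors", "4")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  .set("spark.dynamicAllocation.executorIdleTimeout", "120s") // release executors idle longer than this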

### Security

Property Name Default Meaning
spark.acls.enable false Whether Spark acls should be enabled. If enabled, this checks to see if the user has access permissions to view or modify the job. Note this requires the user to be known, so if the user comes across as null no checks are done. Filters can be used with the UI to authenticate and set the user.
spark.admin.acls Empty Comma separated list of users/administrators that have view and modify access to all Spark jobs. This can be used if you run on a shared cluster and have a set of administrators or devs who help debug when things do not work. Putting a "*" in the list means any user can have the privilege of admin.
spark.admin.acls.groups Empty Comma separated list of groups that have view and modify access to all Spark jobs. This can be used if you have a set of administrators or developers who help maintain and debug the underlying infrastructure. Putting a "*" in the list means any user in any group can have the privilege of admin. The user groups are obtained from the instance of the groups mapping provider specified by spark.user.groups.mapping. Check the entry spark.user.groups.mapping for more details.
spark.user.groups.mapping org.apache.spark.security.ShellBasedGroupsMappingProvider The list of groups for a user is determined by a group mapping service defined by the trait org.apache.spark.security.GroupMappingServiceProvider, which can be configured by this property. A default Unix shell based implementation, org.apache.spark.security.ShellBasedGroupsMappingProvider, is provided and can be specified to resolve a list of groups for a user. Note: This implementation supports only a Unix/Linux based environment. Windows environments are currently not supported. However, a new platform/protocol can be supported by implementing the trait org.apache.spark.security.GroupMappingServiceProvider.
spark.authenticate false Whether Spark authenticates its internal connections. See spark.authenticate.secret if not running on YARN.
spark.authenticate.secret None Set the secret key used for Spark to authenticate between components. This needs to be set if not running on YARN and authentication is enabled.
spark.network.crypto.enabled false Enable encryption using the commons-crypto library for RPC and block transfer service. Requires spark.authenticate to be enabled.
spark.network.crypto.keyLength 128 The length in bits of the encryption key to generate. Valid values are 128, 192 and 256.
spark.network.crypto.keyFactoryAlgorithm PBKDF2WithHmacSHA1 The key factory algorithm to use when generating encryption keys. Should be one of the algorithms supported by the javax.crypto.SecretKeyFactory class in the JRE being used.
spark.network.crypto.saslFallback true Whether to fall back to SASL authentication if authentication fails using Spark's internal mechanism. This is useful when the application is connecting to old shuffle services that do not support the internal Spark authentication protocol. On the server side, this can be used to block older clients from authenticating against a new shuffle service.
spark.network.crypto.config.* None Configuration values for the commons-crypto library, such as which cipher implementations to use. The config name should be the name of commons-crypto configuration without the "commons.crypto" prefix.
spark.authenticate.enableSaslEncryption false Enable encrypted communication when authentication is enabled. This is supported by the block transfer service and the RPC endpoints.
spark.network.sasl.serverAlwaysEncrypt false Disable unencrypted connections for services that support SASL authentication.
spark.core.connection.ack.wait.timeout spark.network.timeout How long for the connection to wait for an ack to occur before timing out and giving up. To avoid unwanted timeouts caused by long pauses such as GC, you can set a larger value.
spark.modify.acls Empty Comma separated list of users that have modify access to the Spark job. By default only the user that started the Spark job has access to modify it (kill it for example). Putting a "*" in the list means any user can have access to modify it.
spark.modify.acls.groups Empty Comma separated list of groups that have modify access to the Spark job. This can be used if you have a set of administrators or developers from the same team to have access to control the job. Putting a "*" in the list means any user in any group has the access to modify the Spark job. The user groups are obtained from the instance of the groups mapping provider specified by spark.user.groups.mapping. Check the entry spark.user.groups.mapping for more details.
spark.ui.filters None Comma separated list of filter class names to apply to the Spark web UI. The filter should be a standard . Parameters to each filter can also be specified by setting a java system property of: 
spark.<class name of filter>.params='param1=value1,param2=value2'
For example: 
-Dspark.ui.filters=com.test.filter1 
-Dspark.com.test.filter1.params='param1=foo,param2=testing'
spark.ui.view.acls Empty Comma separated list of users that have view access to the Spark web ui. By default only the user that started the Spark job has view access. Putting a "*" in the list means any user can have view access to this Spark job.
spark.ui.view.acls.groups Empty Comma separated list of groups that have view access to the Spark web ui to view the Spark Job details. This can be used if you have a set of administrators or developers or users who can monitor the Spark job submitted. Putting a "*" in the list means any user in any group can view the Spark job details on the Spark web ui. The user groups are obtained from the instance of the groups mapping provider specified by spark.user.groups.mapping. Check the entry spark.user.groups.mapping for more details.

### TLS / SSL

Property Name Default Meaning
spark.ssl.enabled false Whether to enable SSL connections on all supported protocols. 
When spark.ssl.enabled is configured, spark.ssl.protocol is required. 
All the SSL settings like spark.ssl.xxx where xxx is a particular configuration property, denote the global configuration for all the supported protocols. In order to override the global configuration for the particular protocol, the properties must be overwritten in the protocol-specific namespace. 
Use spark.ssl.YYY.XXX settings to overwrite the global configuration for the particular protocol denoted by YYY. Example values for YYY include fs, ui, standalone, and historyServer. See  for details on hierarchical SSL configuration for services.
spark.ssl.[namespace].port None The port where the SSL service will listen on. 
The port must be defined within a namespace configuration; see  for the available namespaces. 
When not set, the SSL port will be derived from the non-SSL port for the same service. A value of "0" will make the service bind to an ephemeral port.
spark.ssl.enabledAlgorithms Empty A comma separated list of ciphers. The specified ciphers must be supported by JVM. The reference list of protocols one can find on  page. Note: If not set, it will use the default cipher suites of JVM.
spark.ssl.keyPassword None A password to the private key in key-store.
spark.ssl.keyStore None A path to a key-store file. The path can be absolute or relative to the directory where the component is started in.
spark.ssl.keyStorePassword None A password to the key-store.
spark.ssl.keyStoreType JKS The type of the key-store.
spark.ssl.protocol None A protocol name. The protocol must be supported by JVM. The reference list of protocols one can find on  page.
spark.ssl.needClientAuth false Set true if SSL needs client authentication.
spark.ssl.trustStore None A path to a trust-store file. The path can be absolute or relative to the directory where the component is started in.
spark.ssl.trustStorePassword None A password to the trust-store.
spark.ssl.trustStoreType JKS The type of the trust-store.

Reposted from: https://www.cnblogs.com/xinfang520/p/7793349.html
