Name | Default Value | Description
---|---|---
zeppelin.bigquery.project_id |  | Google Project ID
zeppelin.bigquery.wait_time | 5000 | Query timeout in milliseconds
zeppelin.bigquery.max_no_of_rows | 100000 | Maximum result set size
1. Add the new syntax-highlighting file to [zeppelin-web/bower.json](https://github.com/apache/zeppelin/blob/master/zeppelin-web/bower.json)
(when built, [zeppelin-web/src/index.html](https://github.com/apache/zeppelin/blob/master/zeppelin-web/src/index.html)
will be changed automatically).
2. Add a `language` field to the `editor` object. Note that if you don't specify the `language` field, your interpreter will use plain-text mode for syntax highlighting. Let's say you want to set your language to `java`; then add:
```json
"editor": {
  "language": "java"
}
```

### Edit on double click
If your interpreter uses a markup language such as Markdown or HTML, set `editOnDblClick` to `true` so that the text editor opens on paragraph double click and closes when the paragraph is run. Otherwise set it to `false`.
```json
"editor": {
"editOnDblClick": false
}
```

### Completion key (Optional)
By default, `Ctrl+.` (dot) brings up the autocompletion list in the editor.
Each interpreter can configure its own autocompletion key through `completionKey`.
Currently, `TAB` is the only available option.

```json
"editor": {
  "completionKey": "TAB"
}
```

## Install your interpreter binary
Once you have built your interpreter, you can place it under the interpreter directory with all its dependencies.
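
For example (a minimal sketch; the directory name, jar names, and paths below are placeholders rather than values from this document):

```bash
# Illustrative sketch only: adjust names and paths for your own interpreter build.
mkdir -p $ZEPPELIN_HOME/interpreter/myinterpreter
cp target/myinterpreter-*.jar $ZEPPELIN_HOME/interpreter/myinterpreter/
# copy the interpreter's dependency jars alongside it as well
cp target/dependency/*.jar $ZEPPELIN_HOME/interpreter/myinterpreter/
```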

To configure your interpreter, add its class name to the interpreter list property in `conf/zeppelin-site.xml`. The property value is a comma-separated list of `[INTERPRETER_CLASS_NAME]` entries. For example (the property and class names below are illustrative):

```xml
<!-- illustrative only: append your interpreter class to the comma-separated list -->
<property>
  <name>zeppelin.interpreters</name>
  <value>org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.myinterpreter.MyInterpreter</value>
</property>
```

Below are the configuration parameters and their default values.

Name | Description | Default Value
---|---|---
`cassandra.cluster` | Name of the Cassandra cluster to connect to | Test Cluster
`cassandra.compression.protocol` | On-wire compression. Possible values are: `NONE`, `SNAPPY`, `LZ4` | `NONE`
`cassandra.credentials.username` | If security is enabled, provide the login | none
`cassandra.credentials.password` | If security is enabled, provide the password | none
`cassandra.hosts` | Comma-separated Cassandra hosts (DNS name or IP address). Ex: `192.168.0.12,node2,node3` | `localhost`
`cassandra.interpreter.parallelism` | Number of concurrent paragraphs (query blocks) that can be executed | 10
`cassandra.keyspace` | Default keyspace to connect to. It is strongly recommended to keep the default value and to prefix the table name with the actual keyspace in all of your queries | `system`
`cassandra.load.balancing.policy` | Load balancing policy. Default = `new TokenAwarePolicy(new DCAwareRoundRobinPolicy())`. To specify your own policy, provide the fully qualified class name (FQCN) of your policy. At runtime the interpreter will instantiate the policy using `Class.forName(FQCN)` | DEFAULT
`cassandra.max.schema.agreement.wait.second` | Cassandra max schema agreement wait in seconds | 10
`cassandra.pooling.core.connection.per.host.local` | Protocol V2 and below default = 2. Protocol V3 and above default = 1 | 2
`cassandra.pooling.core.connection.per.host.remote` | Protocol V2 and below default = 1. Protocol V3 and above default = 1 | 1
`cassandra.pooling.heartbeat.interval.seconds` | Cassandra pool heartbeat interval in seconds | 30
`cassandra.pooling.idle.timeout.seconds` | Cassandra idle timeout in seconds | 120
`cassandra.pooling.max.connection.per.host.local` | Protocol V2 and below default = 8. Protocol V3 and above default = 1 | 8
`cassandra.pooling.max.connection.per.host.remote` | Protocol V2 and below default = 2. Protocol V3 and above default = 1 | 2
`cassandra.pooling.max.request.per.connection.local` | Protocol V2 and below default = 128. Protocol V3 and above default = 1024 | 128
`cassandra.pooling.max.request.per.connection.remote` | Protocol V2 and below default = 128. Protocol V3 and above default = 256 | 128
`cassandra.pooling.new.connection.threshold.local` | Protocol V2 and below default = 100. Protocol V3 and above default = 800 | 100
`cassandra.pooling.new.connection.threshold.remote` | Protocol V2 and below default = 100. Protocol V3 and above default = 200 | 100
`cassandra.pooling.pool.timeout.millisecs` | Cassandra pool timeout in milliseconds | 5000
`cassandra.protocol.version` | Cassandra binary protocol version | 4
`cassandra.query.default.consistency` | Cassandra query default consistency level. Available values: `ONE`, `TWO`, `THREE`, `QUORUM`, `LOCAL_ONE`, `LOCAL_QUORUM`, `EACH_QUORUM`, `ALL` | `ONE`
`cassandra.query.default.fetchSize` | Cassandra query default fetch size | 5000
`cassandra.query.default.serial.consistency` | Cassandra query default serial consistency level. Available values: `SERIAL`, `LOCAL_SERIAL` | `SERIAL`
`cassandra.reconnection.policy` | Cassandra reconnection policy. Default = `new ExponentialReconnectionPolicy(1000, 10 * 60 * 1000)`. To specify your own policy, provide the fully qualified class name (FQCN) of your policy. At runtime the interpreter will instantiate the policy using `Class.forName(FQCN)` | DEFAULT
`cassandra.retry.policy` | Cassandra retry policy. Default = `DefaultRetryPolicy.INSTANCE`. To specify your own policy, provide the fully qualified class name (FQCN) of your policy. At runtime the interpreter will instantiate the policy using `Class.forName(FQCN)` | DEFAULT
`cassandra.socket.connection.timeout.millisecs` | Cassandra socket default connection timeout in milliseconds | 500
`cassandra.socket.read.timeout.millisecs` | Cassandra socket read timeout in milliseconds | 12000
`cassandra.socket.tcp.no_delay` | Cassandra socket TCP no delay | true
`cassandra.speculative.execution.policy` | Cassandra speculative execution policy. Default = `NoSpeculativeExecutionPolicy.INSTANCE`. To specify your own policy, provide the fully qualified class name (FQCN) of your policy. At runtime the interpreter will instantiate the policy using `Class.forName(FQCN)` | DEFAULT
`cassandra.ssl.enabled` | Enable support for connecting to Cassandra configured with SSL. To connect to Cassandra configured with SSL, set this to `true` | false
`cassandra.ssl.truststore.path` | Filepath for the truststore file to use for connection to Cassandra with SSL |
`cassandra.ssl.truststore.password` | Password for the truststore file to use for connection to Cassandra with SSL |

* `String g.getProperty('PROPERTY_NAME')`

  ```groovy
  g.PROPERTY_NAME
  g.'PROPERTY_NAME'
  ```

If you want to connect to HBase running on a cluster, you'll need to follow the next step.

### Export HBASE_HOME
In `conf/zeppelin-env.sh`, export the `HBASE_HOME` environment variable with your HBase installation path. This ensures `hbase-site.xml` can be loaded.

For example:

```bash
export HBASE_HOME=/usr/lib/hbase
```

In order to use Ignite interpreters, you may install Apache Ignite in a few simple steps.

> **Tip.** If you want to run the Ignite examples from the CLI rather than the IDE, you can export an executable JAR file from the IDE, then run it with the command below.

```bash
nohup java -jar <path/to/exported-jar>
```

## Configuring Ignite Interpreter

In order to execute a SQL query, use the `%ignite.ignitesql` prefix.

Name | Description
---|---
`default.precode` | Some SQL which is executed every time after initialization of the interpreter (see Binding mode)
`default.statementPrecode` | SQL code which is executed before the SQL from the paragraph, in the same database session (database connection)
`default.completer.schemaFilters` |

There are more JDBC interpreter properties you can specify like below.

Name | Description
---|---
`default.jceks.credentialKey` | jceks credential key
`zeppelin.jdbc.interpolation` | Enables ZeppelinContext variable interpolation into paragraph text. Default value is `false`.
`zeppelin.jdbc.maxConnLifetime` | Maximum connection lifetime in milliseconds. A value of zero or less means the connection has an infinite lifetime.

| | |
|---|---|
| Description | This `PUT` method updates the contents of the paragraph with the given id, e.g. `{"text": "hello"}` |
| URL | `http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]` |
| Success code | 200 |
| Bad Request code | 400 |
| Forbidden code | 403 |
| Not Found code | 404 |
| Fail code | 500 |
| Sample JSON input | `{"title": "Hello world", "text": "println(\"hello world\")"}` |
| Sample JSON response | `{"status": "OK", "message": ""}` |
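
For instance, a paragraph could be updated with `curl` as sketched below; the host, port, note id, and paragraph id are placeholders rather than values from this page.

```bash
# Sketch only: substitute your Zeppelin host/port, note id, and paragraph id.
curl -X PUT \
  -H "Content-Type: application/json" \
  -d '{"title": "Hello world", "text": "println(\"hello world\")"}' \
  http://localhost:8080/api/notebook/<noteId>/paragraph/<paragraphId>
```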

| | |
|---|---|
| Description | This `POST` method adds a cron job for the given note id. The default value of `releaseResource` is `false`. |
| Sample JSON input | `{"cron": "cron expression of note", "releaseResource": "false"}` |

| | |
|---|---|
| Description | This `GET` method gets the cron job expression of the given note id. The body field of the returned JSON contains the cron expression and the `releaseResource` flag. |
| Sample JSON response | `{"status": "OK", "body": {"cron": "0 0/1 * * * ?", "releaseResource": true}}` |
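
As a rough sketch (the cron endpoint path is not shown in the tables above, so the `/api/notebook/cron/[noteId]` URL, host, and note id here are assumptions for illustration), adding a cron job could look like:

```bash
# Sketch only: the endpoint path and note id below are assumed, not taken from this page.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"cron": "0 0/1 * * * ?", "releaseResource": "false"}' \
  http://localhost:8080/api/notebook/cron/<noteId>
```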
 * How to use:
 * {@code %jdbc.sql}
 * {@code
 * SELECT store_id, count(*)
 * FROM retail_demo.order_lineitems_pxf
 * GROUP BY store_id;
 * }
 *
 * g.html().with{
 *   h1("hello")
 *   h2("world")
 * }
 */

  @ZeppelinApi
  public void run(String noteId, String paragraphId, InterpreterContext context) {
    if (paragraphId.equals(context.getParagraphId())) {
      throw new RuntimeException("Can not run current Paragraph");
    }
    // ...
 * This implementation saves the looked-up LDAP groups in the Shiro Session so that they
 * are easy to look up outside of this object.
 *
 * Sample config for shiro.ini:
 *
 * [main]
 * ldapRealm = org.apache.zeppelin.realm.LdapRealm
 * ldapRealm.contextFactory.url = ldap://localhost:33389
 * ldapRealm.contextFactory.authenticationMechanism = simple
 * ldapRealm.contextFactory.systemUsername = uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
 * ldapRealm.contextFactory.systemPassword = S{ALIAS=ldcSystemPassword}
 * ldapRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/zeppelin.jceks
 * ldapRealm.userDnTemplate = uid={0},ou=people,dc=hadoop,dc=apache,dc=org
 * # Ability to set ldap paging size if needed; default is 100
 * ldapRealm.pagingSize = 200
 * ldapRealm.authorizationEnabled = true
 * ldapRealm.searchBase = dc=hadoop,dc=apache,dc=org
 * ldapRealm.userSearchBase = dc=hadoop,dc=apache,dc=org
 * ldapRealm.groupSearchBase = ou=groups,dc=hadoop,dc=apache,dc=org
 * ldapRealm.userObjectClass = person
 * ldapRealm.groupObjectClass = groupofnames
 * # Allow userSearchAttribute to be customized
 * ldapRealm.userSearchAttributeName = sAMAccountName
 * ldapRealm.memberAttribute = member
 * # Force usernames returned from ldap to lowercase, useful for AD
 * ldapRealm.userLowerCase = true
 * # Ability to set searchScopes: subtree (default), one, base
 * ldapRealm.userSearchScope = subtree;
 * ldapRealm.groupSearchScope = subtree;
 * ldapRealm.userSearchFilter = (&(objectclass=person)(sAMAccountName={0}))
 * ldapRealm.groupSearchFilter = (&(objectclass=groupofnames)(member={0}))
 * ldapRealm.memberAttributeValueTemplate=cn={0},ou=people,dc=hadoop,dc=apache,dc=org
 * # Enable support for nested groups using the LDAP_MATCHING_RULE_IN_CHAIN operator
 * ldapRealm.groupSearchEnableMatchingRuleInChain = true
 *
 * # Optional mapping from physical groups to logical application roles
 * ldapRealm.rolesByGroup = \ LDN_USERS: user_role,\ NYK_USERS: user_role,\ HKG_USERS: user_role,
 * \ GLOBAL_ADMIN: admin_role,\ DEMOS: self-install_role
 *
 * # Optional list of roles that are allowed to authenticate
 * ldapRealm.allowedRolesForAuthentication = admin_role,user_role
 *
 * ldapRealm.permissionsByRole=\ user_role = *:ToDoItemsJdo:*:*,\ *:ToDoItem:*:*;
 * \ self-install_role = *:ToDoItemsFixturesService:install:* ; \ admin_role = *
 *
 * [urls]
 * **=authcBasic
 *
 * securityManager.realms = $ldapRealm
 */
public class LdapRealm extends JndiLdapRealm {
  private static final String SUBJECT_USER_GROUPS = "subject.userGroups";
  private static final String MEMBER_URL = "memberUrl";
  private static final String POSIX_GROUP = "posixGroup";

  // LDAP Operator '1.2.840.113556.1.4.1941'
  // walks the chain of ancestry in objects all the way to the root until it finds a match
  // see https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx
  private static final String MATCHING_RULE_IN_CHAIN_FORMAT =
      "(&(objectClass=%s)(%s:1.2.840.113556.1.4.1941:=%s))";

  private static Pattern TEMPLATE_PATTERN = Pattern.compile("\\{(\\d+?)\\}");

  // ...

  private String groupIdAttribute = "cn";

  private String memberAttributeValuePrefix = "uid=";
  private String memberAttributeValueSuffix = "";

  private final Map