Commit 76575ee
[MINOR][SQL] Remove toLowerCase(Locale.ROOT) for CATALOG_IMPLEMENTATION
### What changes were proposed in this pull request?

This PR aims to remove redundant `toLowerCase(Locale.ROOT)` transforms when checking `CATALOG_IMPLEMENTATION` values.

### Why are the changes needed?

We already have `checkValues`.

https://github.com/apache/spark/blob/9d9675922543e3e5c3b01023e5a756462a1fd308/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala#L52

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs. Manually, I checked the following; I believe these are all occurrences.

```
$ git grep -C1 '.toLowerCase(Locale.ROOT)' | grep '"hive'
repl/src/main/scala/org/apache/spark/repl/Main.scala-      .get(CATALOG_IMPLEMENTATION.key, "hive")
repl/src/main/scala/org/apache/spark/repl/Main.scala:      .toLowerCase(Locale.ROOT) == "hive") {
sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala:          jsc.sc.conf.get(CATALOG_IMPLEMENTATION.key, "hive").toLowerCase(Locale.ROOT) ==
sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala-            "hive" &&
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSchemaInferenceSuite.scala-      provider = Option("hive"),
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#45184 from dongjoon-hyun/SPARK_CATALOG_IMPLEMENTATION.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
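The rationale is that `checkValues` already constrains what can be stored in the config, so a plain equality test at read time is safe. The following standalone Scala sketch (not Spark's actual `ConfigBuilder` API — `ConfigDemo` and its `set`/`get` helpers are hypothetical stand-ins) illustrates why: once non-canonical casings are rejected at set time, lowercasing at read time is redundant.

```scala
// Minimal sketch of set-time validation, mimicking the effect of
// StaticSQLConf's checkValues. Hypothetical stand-in, not Spark code.
object ConfigDemo {
  final case class ConfigEntry(key: String, validValues: Set[String], default: String) {
    // Reject any value outside the allowed set, exactly as checkValues does.
    def set(conf: scala.collection.mutable.Map[String, String], value: String): Unit = {
      require(validValues.contains(value),
        s"The value of $key should be one of ${validValues.mkString(", ")}, but was $value")
      conf(key) = value
    }
    def get(conf: scala.collection.Map[String, String]): String =
      conf.getOrElse(key, default)
  }

  val CATALOG_IMPLEMENTATION =
    ConfigEntry("spark.sql.catalogImplementation", Set("hive", "in-memory"), "in-memory")

  def main(args: Array[String]): Unit = {
    val conf = scala.collection.mutable.Map.empty[String, String]
    CATALOG_IMPLEMENTATION.set(conf, "hive")
    // Since "HIVE" or "Hive" can never be stored, plain equality suffices:
    println(CATALOG_IMPLEMENTATION.get(conf) == "hive") // true
    // Mixed-case values are rejected at set time:
    val rejected =
      try { CATALOG_IMPLEMENTATION.set(conf, "Hive"); false }
      catch { case _: IllegalArgumentException => true }
    println(rejected) // true
  }
}
```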
1 parent 8ede494 commit 76575ee

2 files changed (+3, −7 lines)

`repl/src/main/scala/org/apache/spark/repl/Main.scala` — 1 addition, 4 deletions

```diff
@@ -19,7 +19,6 @@ package org.apache.spark.repl
 
 import java.io.File
 import java.net.URI
-import java.util.Locale
 
 import scala.tools.nsc.GenericRunnerSettings
 
@@ -104,9 +103,7 @@ object Main extends Logging {
     }
 
     val builder = SparkSession.builder().config(conf)
-    if (conf
-      .get(CATALOG_IMPLEMENTATION.key, "hive")
-      .toLowerCase(Locale.ROOT) == "hive") {
+    if (conf.get(CATALOG_IMPLEMENTATION.key, "hive") == "hive") {
       if (SparkSession.hiveClassesArePresent) {
         // In the case that the property is not set at all, builder's config
         // does not have this value set to 'hive' yet. The original default
```

`sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala` — 2 additions, 3 deletions

```diff
@@ -18,7 +18,7 @@
 package org.apache.spark.sql.api.r
 
 import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataInputStream, DataOutputStream}
-import java.util.{Locale, Map => JMap}
+import java.util.{Map => JMap}
 
 import scala.jdk.CollectionConverters._
 import scala.util.matching.Regex
@@ -46,8 +46,7 @@ private[sql] object SQLUtils extends Logging {
       enableHiveSupport: Boolean): SparkSession = {
     val spark =
       if (enableHiveSupport &&
-          jsc.sc.conf.get(CATALOG_IMPLEMENTATION.key, "hive").toLowerCase(Locale.ROOT) ==
-            "hive" &&
+          jsc.sc.conf.get(CATALOG_IMPLEMENTATION.key, "hive") == "hive" &&
           // Note that the order of conditions here are on purpose.
           // `SparkSession.hiveClassesArePresent` checks if Hive's `HiveConf` is loadable or not;
           // however, `HiveConf` itself has some static logic to check if Hadoop version is
```
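The retained comment notes that the order of the `&&` conditions is deliberate. A minimal Scala sketch (with a hypothetical `hiveClassesArePresent` stand-in, not Spark's real method) shows how short-circuiting `&&` lets the cheap config comparison guard the more expensive class-loading probe:

```scala
// Sketch: Scala's && evaluates left to right and short-circuits, so
// putting the cheap string comparison first means the class-loading
// probe never runs when the catalog implementation is not "hive".
object ShortCircuitDemo {
  var probed = false // records whether the expensive check was evaluated

  // Hypothetical stand-in for SparkSession.hiveClassesArePresent.
  def hiveClassesArePresent: Boolean = {
    probed = true
    try { Class.forName("org.apache.hadoop.hive.conf.HiveConf"); true }
    catch { case _: ClassNotFoundException => false }
  }

  def main(args: Array[String]): Unit = {
    val catalogImpl = "in-memory"
    val useHive = catalogImpl == "hive" && hiveClassesArePresent
    println(useHive) // false
    println(probed)  // false: the probe was short-circuited away
  }
}
```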
