I'm facing a scenario where reading files fails because certain folders within my source (a data lake) are empty. The following snippet then fails:
foreach (var repoFolder in repoFolders)
{
    var inputDf = spark
        .Read()
        .Option("multiline", "true")
        .Json($"{cdsInputRootUri}/{snapshotId}/{repoFolder}/*/*.json");
    ...
}
In Python I could try:

for add in list(addrs):  # iterate over a copy so removing from addrs is safe
    try:
        spark.read.format("parquet").load(add)
    except Exception:
        print(add)
        addrs.remove(add)

but wrapping my C# code in a try-catch won't work. What's the recommended way of converting such code to .NET for Spark?
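A variant of the Python pattern above, sketched here with a hypothetical `try_read` callable standing in for `spark.read.format("parquet").load(...)`, builds a new list of readable paths instead of removing entries from the list being iterated; the names `filter_readable` and `try_read` are illustrative, not part of any Spark API:

```python
def filter_readable(addrs, try_read):
    """Return the subset of paths that load without raising.

    `try_read` is a hypothetical stand-in for a Spark read call such as
    spark.read.format("parquet").load(path).
    """
    readable = []
    for add in addrs:
        try:
            try_read(add)  # attempt the read; an empty folder raises here
        except Exception:
            print(f"skipping unreadable path: {add}")
            continue
        readable.append(add)
    return readable


# Usage with a stub reader that fails on paths containing "empty":
def fake_read(path):
    if "empty" in path:
        raise IOError(f"no files under {path}")


print(filter_readable(["a", "empty1", "b"], fake_read))  # ['a', 'b']
```

The same shape translates to C#: accumulate successful reads into a new collection rather than mutating the sequence being enumerated.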