Spark error during file transfer

I had a similar problem to this. What version of Openfire are you on? Also, when the file is being transferred, is the file open at the same time?

If you have access to Openfire, I would check the logs and see what error is being returned. On ours, I started to think the problem was with the file itself: I renamed it, stored it somewhere else, and away it went. The problem could have been a corrupted temp file created while the file was open.

Also, with Windows 7 and above you need to make sure that the downloads folder location in the "settings" of the receiving user points to an area they have write access to. When I first installed Spark on Windows 7, I found that the downloads folder was sitting under the admin profile instead of the user profile.

Hope this is of some help. 



All of the cases below were run on a GCP Dataproc cluster with a master and 4 executors.

Case 1: I execute a runnable JAR file with spark-submit in local mode, with the Spark config

SparkConf config = new SparkConf().setMaster("local[*]").setAppName("ANNCLUSTERED");

spark-submit /home/aavashbhandari/dataset/RunSCL.jar /home/aavashbhandari/dataset/California_Nodes.txt /home/aavashbhandari/dataset/California_Edges.txt /home/aavashbhandari/dataset/California_part_4.txt 4 10000 10000 1
and in local mode it runs without any issues.

Case 2: Setting the Spark configuration in the source as:

        SparkConf config = new SparkConf().setAppName("ANNCLUSTERD").set("spark.locality.wait", "0")
                .set("spark.submit.deployMode", "cluster").set("spark.driver.maxResultSize", "6g")
                .set("spark.executor.memory", "6g").setMaster("spark://34.80.87.222:7077").set("spark.cores.max", "8")
                .set("spark.blockManager.port", "10025").set("spark.driver.blockManager.port", "10026")
                .set("spark.driver.port", "10027").set("spark.shuffle.service.enabled", "false")
                .set("spark.dynamicAllocation.enabled", "false");

and running the same spark-submit command, I get this error:

22/11/15 08:45:37 ERROR org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
22/11/15 08:45:37 WARN org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend: Application ID is not initialized yet.
22/11/15 08:45:37 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@40147317{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
22/11/15 08:45:37 WARN org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
22/11/15 08:45:38 ERROR org.apache.spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
        at scala.Predef$.require(Predef.scala:281)
        at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:517)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:296)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:855)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:939)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:948)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
        at scala.Predef$.require(Predef.scala:281)
        at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:517)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:296)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:855)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:939)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:948)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
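A hedged note on Case 2 (my reading, not a confirmed diagnosis): "All masters are unresponsive" means the driver never managed to register with spark://34.80.87.222:7077. Dataproc clusters run Spark on YARN by default, so unless a standalone master was started by hand, nothing is listening on port 7077. Hard-coding setMaster and spark.submit.deployMode in the SparkConf also competes with whatever spark-submit is told on the command line. A common pattern is to keep only the app name in code and choose the master at submit time:

```java
// Sketch, assuming the master/deploy mode should come from spark-submit
// flags instead of being hard-coded (the host/port below are the ones
// from the question, kept only for illustration).
SparkConf config = new SparkConf().setAppName("ANNCLUSTERED");

// Submit-time choices (command line, not Java):
//   spark-submit --master yarn --deploy-mode cluster ... RunSCL.jar ...
// or, only if a standalone master really is listening on that host/port:
//   spark-submit --master spark://34.80.87.222:7077 ... RunSCL.jar ...
```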

Case 3: I try to run the same JAR file in YARN cluster mode with 4 executors, changing the config to:

SparkConf config = new SparkConf().setAppName("ANN-SCL");

and using the following submit command:

spark-submit --class main.GraphNetworkSCLAlgorithm --master yarn --deploy-mode cluster --num-executors 4 /home/aavashbhandari/dataset/RunSCL.jar /home/aavashbhandari/dataset/California_Nodes.txt /home/aavashbhandari/dataset/California_Edges.txt /home/aavashbhandari/dataset/California_part_4.txt 4 10000 10000 1

The job gets accepted but never passes to the running stage, and fails with the following error:

Application application_1668488196607_0004 failed 2 times due to AM Container for appattempt_1668488196607_0004_000002 exited with exitCode: 13
Failing this attempt.Diagnostics: [2022-11-15 08:16:01.044]Exception from container-launch.
Container id: container_1668488196607_0004_02_000001
Exit code: 13
[2022-11-15 08:16:01.086]Container exited with a non-zero exit code 13. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.io.FileReader.<init>(FileReader.java:58)
at framework.UtilsManagement.readEdgeTxtFileReturnGraph(UtilsManagement.java:382)
at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:686)
java.io.FileNotFoundException: /home/aavashbhandari/dataset/California_Edges.txt (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.io.FileReader.<init>(FileReader.java:58)
at edu.ufl.cise.bsmock.graph.YenGraph.readFromFile(YenGraph.java:171)
at edu.ufl.cise.bsmock.graph.YenGraph.<init>(YenGraph.java:21)
at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:686)
java.io.FileNotFoundException: /home/aavashbhandari/dataset/California_Nodes.txt (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.io.FileReader.<init>(FileReader.java:58)
at framework.UtilsManagement.readTxtNodeFile(UtilsManagement.java:210)
at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:686)
22/11/15 08:16:00 ERROR org.apache.spark.deploy.yarn.ApplicationMaster: Uncaught exception:
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:259)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:263)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:470)
at org.apache.spark.deploy.yarn.ApplicationMaster.runImpl(ApplicationMaster.scala:305)
at org.apache.spark.deploy.yarn.ApplicationMaster.$anonfun$run$1(ApplicationMaster.scala:245)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:781)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1938)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:780)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:805)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
[2022-11-15 08:16:01.087]Container exited with a non-zero exit code 13. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.io.FileReader.<init>(FileReader.java:58)
at framework.UtilsManagement.readEdgeTxtFileReturnGraph(UtilsManagement.java:382)
at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:686)
java.io.FileNotFoundException: /home/aavashbhandari/dataset/California_Edges.txt (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.io.FileReader.<init>(FileReader.java:58)
at edu.ufl.cise.bsmock.graph.YenGraph.readFromFile(YenGraph.java:171)
at edu.ufl.cise.bsmock.graph.YenGraph.<init>(YenGraph.java:21)
at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:686)
java.io.FileNotFoundException: /home/aavashbhandari/dataset/California_Nodes.txt (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.io.FileReader.<init>(FileReader.java:58)
at framework.UtilsManagement.readTxtNodeFile(UtilsManagement.java:210)
at main.GraphNetworkSCLAlgorithm.main(GraphNetworkSCLAlgorithm.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:686)
22/11/15 08:16:00 ERROR org.apache.spark.deploy.yarn.ApplicationMaster: Uncaught exception:
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:259)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:263)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:470)
at org.apache.spark.deploy.yarn.ApplicationMaster.runImpl(ApplicationMaster.scala:305)
at org.apache.spark.deploy.yarn.ApplicationMaster.$anonfun$run$1(ApplicationMaster.scala:245)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:781)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1938)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:780)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:805)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
For more detailed output, check the application tracking page: http://spark-cluster-m:8188/applicationhistory/app/application_1668488196607_0004 Then click on links to logs of each attempt.
. Failing the application.

I am passing the files as arguments using absolute paths, but I still run into the error. I am worried that my configuration is not correct.
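A hedged guess at Case 3 (assuming the dataset only exists on the master node's local disk): in YARN cluster mode the driver runs inside the ApplicationMaster on whichever worker YARN picks, so a plain java.io.FileReader on /home/aavashbhandari/dataset/... only works if that exact path exists on that node. The usual fixes are to put the inputs on a shared filesystem (HDFS, or gs:// on Dataproc) or to ship them with --files and resolve the localized copies via SparkFiles.get:

```java
// Sketch: ship the inputs to every container with --files, e.g.
//   spark-submit --master yarn --deploy-mode cluster \
//     --files /home/aavashbhandari/dataset/California_Nodes.txt,/home/aavashbhandari/dataset/California_Edges.txt \
//     RunSCL.jar California_Nodes.txt California_Edges.txt ...
// and inside the job resolve the bare file name to the container-local copy:
import org.apache.spark.SparkFiles;

String edgesPath = SparkFiles.get("California_Edges.txt"); // container-local path
// edgesPath can now be opened with FileReader on the node where the driver runs.
```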

I found a method on Stack Overflow for uploading files in Spark (Java), but when I tried it, it didn't work.

post("/upload",
          (request, response) -> {

            if (request.raw().getAttribute("org.eclipse.jetty.multipartConfig") == null) {
                MultipartConfigElement multipartConfigElement = new MultipartConfigElement(System.getProperty("java.io.tmpdir"));
                request.raw().setAttribute("org.eclipse.jetty.multipartConfig", multipartConfigElement);
            }
            Part file = request.raw().getPart("file");
            Part name = request.raw().getPart("name");
            String filename = file.getName();
            if(name.getSize() > 0){
                try{
                    filename = IOUtils.toString(name.getInputStream(), StandardCharsets.UTF_8);
                } catch(Exception e){
                    e.printStackTrace();
                }
            }
            Path filePath = Paths.get(".",filename);
            Files.copy(file.getInputStream(),filePath);
            return "Done!";
          });


I use Postman to send the request.


I got an error pointing to this line:

Part file = request.raw().getPart("file");
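Separately from the immediate exception, there is a bug in the snippet worth noting: Part.getName() returns the form-field name (here "file"), not the uploaded file's name; in Servlet 3.1+ the client-supplied name comes from Part.getSubmittedFileName(), and it should be stripped of any path components before use (some browsers send a full path). A small standalone sketch of that sanitizing step, where the fallback name "upload.bin" is my own placeholder:

```java
import java.nio.file.Paths;

public class UploadNameDemo {
    // Part.getName() returns the form-field name ("file"), not the uploaded
    // file's name; Servlet 3.1's Part.getSubmittedFileName() returns the
    // client-supplied name. Either way, strip any path components before
    // using it on the server side.
    static String safeFileName(String submitted) {
        if (submitted == null || submitted.isEmpty()) {
            return "upload.bin"; // hypothetical fallback name
        }
        // Normalize Windows separators, then keep only the last path segment.
        String cleaned = submitted.replace('\\', '/');
        return Paths.get(cleaned).getFileName().toString();
    }

    public static void main(String[] args) {
        System.out.println(safeFileName("C:\\Users\\lee\\photo.jpg"));
        System.out.println(safeFileName("photo.jpg"));
    }
}
```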

asked Jan 12, 2016 at 14:57


post("/upload", "multipart/form-data", (request, response) -> {

String location = "image";          // the directory location where files will be stored
long maxFileSize = 100000000;       // the maximum size allowed for uploaded files
long maxRequestSize = 100000000;    // the maximum size allowed for multipart/form-data requests
int fileSizeThreshold = 1024;       // the size threshold after which files will be written to disk

MultipartConfigElement multipartConfigElement = new MultipartConfigElement(
     location, maxFileSize, maxRequestSize, fileSizeThreshold);
 request.raw().setAttribute("org.eclipse.jetty.multipartConfig",
     multipartConfigElement);

Collection<Part> parts = request.raw().getParts();
for (Part part : parts) {
   System.out.println("Name: " + part.getName());
   System.out.println("Size: " + part.getSize());
   System.out.println("Filename: " + part.getSubmittedFileName());
}

String fName = request.raw().getPart("file").getSubmittedFileName();
System.out.println("Title: " + request.raw().getParameter("title"));
System.out.println("File: " + fName);

Part uploadedFile = request.raw().getPart("file");
Path out = Paths.get("image/" + fName);
try (final InputStream in = uploadedFile.getInputStream()) {
   Files.copy(in, out);
   uploadedFile.delete();
}
// cleanup
multipartConfigElement = null;
parts = null;
uploadedFile = null;

return "OK";
});

This works well. I found it at https://groups.google.com/forum/#!msg/sparkjava/fjO64BP1UQw/CsxdNVz7qrAJ
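One detail worth flagging in this otherwise working answer (a sketch, with StandardCopyOption.REPLACE_EXISTING added by me): the bare Files.copy(in, out) throws FileAlreadyExistsException if the same file name is uploaded twice, and it assumes the "image" directory already exists. A standalone version of the save step that handles both:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyUploadDemo {
    // Write an uploaded stream to targetDir/fileName, creating the directory
    // if needed and overwriting any earlier upload with the same name.
    static Path saveUpload(InputStream in, Path targetDir, String fileName) throws IOException {
        Files.createDirectories(targetDir);
        Path out = targetDir.resolve(fileName);
        Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("image");
        byte[] payload = "hello".getBytes();
        Path saved = saveUpload(new ByteArrayInputStream(payload), dir, "photo.jpg");
        System.out.println(Files.size(saved)); // size of the payload written
    }
}
```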

answered Jan 15, 2016 at 1:51

L. YanJun

Ignite Realtime Community Forums


Member since Oct 2015 · 6 posts
Group memberships: Members

Subject: File transfer problem (Spark to MiniClient)


Dear sirs (madams ;-) ),
I have run into a problem when sending a file from Spark to MiniClient:
the file transfer request is accepted on the MiniClient side, but the transfer makes no progress (it stays at 0%).

I tested the following:
1. MiniClient to MiniClient file transfer WORKS FINE.
2. MiniClient to Spark file transfer WORKS FINE.

ONLY Spark to MiniClient DOES NOT WORK.

This is the debug info below:

m31@mym3.org(Spark)  SEND FILE TO  robot@mym3.org(MiniClient)   DOES NOT WORK

RECV: <iq xmlns="jabber:client" type="set" from="m31@mym3.org/Spark" to="robot@mym3.org/MiniClient" id="5E6vG-29"><si xmlns="http://jabber.org/protocol/si" mime-type="image/jpeg" profile="http://jabber.org/protocol/si/profile/file-transfer" id="jsi_6135981304854517468"><file xmlns="http://jabber.org/protocol/si/profile/file-transfer" size="712676" name="4antongorlin-lavacreeksportstephensnswaustralia.jpg"><desc>Sending file</desc></file><feature xmlns="http://jabber.org/protocol/feature-neg"><x xmlns="jabber:x:data" type="form"><field type="list-single" var="stream-method"><option><value>http://jabber.org/protocol/bytestreams</value></option><option><value>http://jabber.org/protocol/ibb</value></option></field></x></feature></si></iq>

SEND: <iq id="5E6vG-29" to="m31@mym3.org/Spark" type="result"><si xmlns="http://jabber.org/protocol/si" id="jsi_6135981304854517468"><feature xmlns="http://jabber.org/protocol/feature-neg"><x xmlns="jabber:x:data" type="submit"><field var="stream-method"><value>http://jabber.org/protocol/bytestreams</value></field></x></feature></si></iq>

RECV: <iq xmlns="jabber:client" type="get" from="m31@mym3.org/Spark" to="robot@mym3.org/MiniClient" id="5E6vG-30"><query xmlns="http://jabber.org/protocol/disco#info" /></iq>

SEND: <iq to="m31@mym3.org/Spark" id="5E6vG-30" type="result"><query xmlns="http://jabber.org/protocol/disco#info"><identity type="pc" name="MiniClient" category="client" /><feature var="http://jabber.org/protocol/disco#info" /><feature var="http://jabber.org/protocol/disco#items" /><feature var="http://jabber.org/protocol/muc" /></query></iq>

m31@mym3.org(MiniClient) SEND FILE TO robot@mym3.org(MiniClient) THIS WORKS FINE

RECV: <iq xmlns="jabber:client" type="set" from="m31@mym3.org/MiniClient" to="robot@mym3.org/MiniClient" id="agsXMPP_10"><si xmlns="http://jabber.org/protocol/si" profile="http://jabber.org/protocol/si/profile/file-transfer" id="85a5052f-9dcf-43dc-b196-ab4e31dd388c"><file xmlns="http://jabber.org/protocol/si/profile/file-transfer" size="712676" name="4antongorlin-lavacreeksportstephensnswaustralia.jpg"><range /></file><feature xmlns="http://jabber.org/protocol/feature-neg"><x xmlns="jabber:x:data" type="form"><field type="list-single" var="stream-method"><option><value>http://jabber.org/protocol/bytestreams</value></option></field></x></feature></si></iq>

SEND: <iq id="agsXMPP_10" to="m31@mym3.org/MiniClient" type="result"><si xmlns="http://jabber.org/protocol/si" id="85a5052f-9dcf-43dc-b196-ab4e31dd388c"><feature xmlns="http://jabber.org/protocol/feature-neg"><x xmlns="jabber:x:data" type="submit"><field var="stream-method"><value>http://jabber.org/protocol/bytestreams</value></field></x></feature></si></iq>

RECV: <iq xmlns="jabber:client" type="set" from="m31@mym3.org/MiniClient" to="robot@mym3.org/MiniClient" id="agsXMPP_11"><query xmlns="http://jabber.org/protocol/bytestreams" sid="85a5052f-9dcf-43dc-b196-ab4e31dd388c"><streamhost host="proxy.mym3.org" port="7777" jid="proxy.mym3.org" /></query></iq>

robot@mym3.org(MiniClient) SEND FILE TO  m31@mym3.org(Spark) THIS WORKS FINE

SEND: <iq id="agsXMPP_13" to="m31@mym3.org/Spark" type="set"><si xmlns="http://jabber.org/protocol/si" profile="http://jabber.org/protocol/si/profile/file-transfer" id="a790a9d1-a7af-410d-9ea1-022d9744ed6a"><file xmlns="http://jabber.org/protocol/si/profile/file-transfer" name="2_louisenadeau-orangepoppy.jpg" size="524170"><range /></file><feature xmlns="http://jabber.org/protocol/feature-neg"><x xmlns="jabber:x:data" type="form"><field type="list-single" var="stream-method"><option><value>http://jabber.org/protocol/bytestreams</value></option></field></x></feature></si></iq>

RECV: <iq xmlns="jabber:client" type="result" from="m31@mym3.org/Spark" to="robot@mym3.org/MiniClient" id="agsXMPP_13"><si xmlns="http://jabber.org/protocol/si"><feature xmlns="http://jabber.org/protocol/feature-neg"><x xmlns="jabber:x:data" type="submit"><field var="stream-method"><value>http://jabber.org/protocol/bytestreams</value></field></x></feature></si></iq>

SEND: <iq id="agsXMPP_14" to="m31@mym3.org/Spark" type="set"><query xmlns="http://jabber.org/protocol/bytestreams" sid="a790a9d1-a7af-410d-9ea1-022d9744ed6a"><streamhost jid="proxy.mym3.org" host="proxy.mym3.org" port="7777" /></query></iq>

RECV: <iq xmlns="jabber:client" type="result" from="m31@mym3.org/Spark" to="robot@mym3.org/MiniClient" id="agsXMPP_14"><query xmlns="http://jabber.org/protocol/bytestreams"><streamhost-used jid="proxy.mym3.org" /></query></iq>

SEND: <iq id="agsXMPP_15" to="proxy.mym3.org" type="set"><query xmlns="http://jabber.org/protocol/bytestreams" sid="a790a9d1-a7af-410d-9ea1-022d9744ed6a"><activate>m31@mym3.org/Spark</activate></query></iq>

RECV: <iq xmlns="jabber:client" type="result" from="proxy.mym3.org" to="robot@mym3.org/MiniClient" id="agsXMPP_15" />

I have been confused by this problem for days; can anyone point me in the right direction? :-)
It would be much appreciated if someone could help. :nuts:
Thanks,
Lynn


This post was edited on 2016-04-26, 10:39 by lynnjeans.

Member since Oct 2015 · 6 posts
Group memberships: Members


can somebody help? :'(

Member since Feb 2003 · 4447 posts · Location: Germany
Group memberships: Administrators, Members


I don't see an error message in your XML, so I can't tell why it fails or where it stops.
I would suggest that you debug the file transfer code to get more information.

Alex

Member since Oct 2015 · 6 posts
Group memberships: Members


thanks you for reply, Alex,

Yes, as you saw, no error occurred.
I have debugged the file transfer code and found nothing related to this problem, and I didn't modify any source code of the MiniClient project;
I just downloaded the sample and ran it.

Here is the download link for the latest Spark version:
http://www.igniterealtime.org/downloads/download…?file=s…

Could you please have a little test on this issue?  :huh:
Thanks in advance.

Lynn

Member since Feb 2003 · 4447 posts · Location: Germany
Group memberships: Administrators, Members

Quote by lynnjeans:

Could you please have a little test on this issue?  :huh:

I am extremely busy right now and have no time in the next few days to debug this.
Please compare the logs of a transfer that works with those of a transfer that does not work. I assume that one client does not send an expected packet, which forces the transfer to stop.

Alex

Member since Oct 2015 · 6 posts
Group memberships: Members


Hello  Alex sir, good day!

After comparing the debug XML,
I found the difference:
Spark sending a file to MiniClient goes: Accept -> MiniClient sends the "stream-method" IQ -> Spark replies with a "disco#info" query (this is different from MiniClient: MiniClient does not query this, but Spark does) -> MiniClient sends its DISCO_INFO, DISCO_ITEMS and MUC features -> Spark stops responding.

I guess Spark didn't receive the disco info it expected,
so I tried adding an extra disco info feature on the MiniClient side.
In the SetDiscoInfo method (frmLogin.cs), I added the feature agsXMPP.Uri.BYTESTREAMS:

        private void SetDiscoInfo()
        {
            _connection.DiscoInfo.AddIdentity(new DiscoIdentity("pc", "M3Client", "client"));

            _connection.DiscoInfo.AddFeature(new DiscoFeature(agsXMPP.Uri.DISCO_INFO));
            _connection.DiscoInfo.AddFeature(new DiscoFeature(agsXMPP.Uri.DISCO_ITEMS));
            _connection.DiscoInfo.AddFeature(new DiscoFeature(agsXMPP.Uri.MUC));
            _connection.DiscoInfo.AddFeature(new DiscoFeature(agsXMPP.Uri.BYTESTREAMS)); // code added
        }

The final XML:

<iq to="m31@mym3.org/Android" id="9m5PT-110" type="result">
    <query xmlns="http://jabber.org/protocol/disco#info">
        <identity type="pc" name="M3Client" category="client" />
        <feature var="http://jabber.org/protocol/disco#info" />
        <feature var="http://jabber.org/protocol/disco#items" />
        <feature var="http://jabber.org/protocol/muc" />
        <feature var="http://jabber.org/protocol/bytestreams" />
    </query>
</iq>

The file transfer now works properly, but I do not understand the mechanism:
why does Spark need to query the disco info, and why do we have to add the BYTESTREAMS feature? Please give me a hint. :-)

Thanks in advance.

Lynn

Member since Feb 2003 · 4447 posts · Location: Germany
Group memberships: Administrators, Members

Quote by lynnjeans:

The file transfer now works properly, but I do not understand the mechanism:
why does Spark need to query the disco info, and why do we have to add the BYTESTREAMS feature? Please give me a hint. :-)

Service discovery is a protocol for automatically discovering the features of other entities. When you don't run in a closed environment where you know that everyone is running the same software and what its features are, you can send a disco request to another client and ask which features it supports.

So Spark is asking which features you support. If you don't tell Spark that you support XEP-0065: SOCKS5 Bytestreams, which is what this file transfer uses, then Spark does not try to send you the file, because it assumes you don't support it.

But when Spark knows that you don’t support it, then Spark should not allow you to start a file transfer at all. It should notify you that the software is not compatible.

To find out more about Service Discovery, you can read the extension protocol here:
XEP-0030: Service Discovery

Alex

Member since Oct 2015 · 6 posts
Group memberships: Members


Understood,

Thank you, Alex.

But when Spark knows that you don’t support it, then Spark should not allow you to start a file transfer at all. It should notify you that the software is not compatible.

Yes, Spark should notify me that the feature isn't supported; otherwise I don't know what's going wrong. :-)

Lynn
