Exporting a Report Directly to FTP


Scheduled Data Extract

Interested in using the Crossfuze ‘Scheduled Data Extract’ solution in your ServiceNow instance? The new and improved version includes options to:

  • Export only deltas from a previous export
  • Apply custom data filters
  • Extract data from very large tables using chunking
  • Use custom delimiters
  • Fully tested against all ServiceNow releases!
  • And much more!

Contact Crossfuze for additional information on how you can leverage the power of this enterprise-grade product.


ServiceNow provides numerous ways of getting data into the system. The import set functionality supports pulling data in from an attachment, FTP(S) servers, JDBC data sources, and more. However, a common problem many customers face is getting data out of their ServiceNow instance and into their own data center. The simplest way to handle this today is to have a report emailed to you on a schedule. There has to be a better way!

Scheduled Data Extract Modules

The good news is that there is a better way. The ‘Scheduled Data Extract’ update set allows a user to have reports, tables, or table views sent to a specific directory on a local MID server, or FTPed to a customer’s FTP/FTPS server, on a scheduled basis. The update set has been battle-tested and is used by hundreds of large ServiceNow customers today.

On installation of the Data Extract update set, two new modules appear within the Reports application: ‘Scheduled Data Extract’ and ‘Data Extract History’.



How it works


Scenario #1:

Let’s say you want to have all of your users exported from ServiceNow to a particular computer on your corporate intranet. In this case you want it to end up in a certain directory on a generic Windows XP machine every day at 12 AM. Once you’ve installed the Scheduled Data Extract, you simply fill out the form in the following manner.

Scheduled Data Extract MID Server
Most of these fields should be self-explanatory. The MID Server field specifies which MID server should handle this request. (Note that the ‘MID Server’ field will not be visible until you have installed the MID Server plugin as documented below.) In this scenario, set this field to the MID server running on the machine that will receive the export from ServiceNow. Check the Append timestamp field if you do not want to overwrite the file on your Windows XP machine each time ServiceNow exports its user table.
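To picture what ‘Append timestamp’ buys you, here is a minimal sketch of how a timestamped export filename might be built. The exact naming pattern the update set uses may differ; `timestampedName` and its format are illustrative assumptions.

```javascript
// Sketch: build a timestamped export filename so each scheduled run keeps
// its own file instead of overwriting the previous one. The real update
// set's naming scheme may differ; this pattern is an assumption.
function timestampedName(baseName, when) {
  var dot = baseName.lastIndexOf('.');
  var stem = dot === -1 ? baseName : baseName.slice(0, dot);
  var ext = dot === -1 ? '' : baseName.slice(dot);
  var pad = function (n) { return (n < 10 ? '0' : '') + n; };
  var stamp = when.getUTCFullYear() +
    pad(when.getUTCMonth() + 1) +
    pad(when.getUTCDate()) + '-' +
    pad(when.getUTCHours()) +
    pad(when.getUTCMinutes()) +
    pad(when.getUTCSeconds());
  return stem + '_' + stamp + ext;
}

// With the timestamp appended, a nightly user export no longer overwrites itself:
console.log(timestampedName('sys_user.csv', new Date(Date.UTC(2010, 2, 16, 0, 0, 0))));
// sys_user_20100316-000000.csv
```

Without the timestamp, every run would produce the same `sys_user.csv` and clobber the last export.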

Scenario #2:

You want to export an incident report to an FTP server every Wednesday at 1 PM. This report should be in CSV format. Since the incident table contains so many fields, you want to limit the export to only include the fields that you’re interested in. This screenshot represents the solution. Notice the View field. This allows you to specify which view you want to use. It is by defining the view that you’re able to specify which fields should be exported.

Scheduled Data Extract FTP
Notice that the MID Server field is still mandatory. This is because the MID server does the grunt work of retrieving the data from ServiceNow and FTPing it to your FTP/FTPS server. The big advantage of this is that you do not need to open or modify ports to allow ServiceNow to send data to your FTP server. It can be a local FTP server that you use internally but do not want to open up to the outside world. The MID server is required to actually perform the transport of the exported file to your FTP location.

The ‘Scheduled Data Extract’ update set supports several secure FTP methods. One method that is not supported is SFTP, which is commonly confused with FTPS.

Although it is possible to use this export functionality to export large numbers of records (500K+) from your ServiceNow instance, you should not do so without performance testing first. Any large query against any database has the potential to adversely affect the performance of that database.




By | 2017-02-17T21:37:22-07:00 March 16th, 2010 | Categories: Integration, Reporting | 55 Comments

About the Author:

Jacob is the co-founder of Crossfuze, a ServiceNow implementation product and services company. He is one of the foremost ServiceNow integration experts in the world and designed/wrote many of the plugins found in the ServiceNow product today. Prior to co-founding Crossfuze he worked at ServiceNow during its startup years to write the SSO solutions and 3rd-party integrations.


  1. Jacob Andersen December 8, 2011 at 3:43 pm

    Please update your SNFilePoster mid server script include with the latest code to make sure that we’re debugging a true problem. You can find the latest code here: https://www.servicenowguru.com/scheduled-file-poster-code/

    Please update me with your results. Thanks!

    • pmendoza December 13, 2011 at 12:15 pm

      I actually didn’t have the SNFilePoster script in my instance, but added the updated file you provided. I am still getting the error, however. Let me know if you need me to test something else. Thanks!

      • Jacob Andersen December 13, 2011 at 12:33 pm

        Hmm, if you didn’t even have that script in your instance, then I imagine there is a problem with the installation altogether. I would suggest reapplying the latest update set. I just added a few things to this update set this morning, so please reapply and re-test. If it still does not work, I’d be happy to take a look at your instance.

        • pmendoza December 14, 2011 at 8:21 am

          I applied the new update set, but I am still receiving the error.

          • Jacob Andersen December 14, 2011 at 12:27 pm

            What is your SNC instance? I’ll hop on and take a look. I’ll also delete your post after I get on your instance so that others don’t see.


          • pmendoza December 14, 2011 at 1:34 pm

            fxcmdev, thank you so much!

  2. Jacob Andersen December 15, 2011 at 9:29 pm

    pmendoza, I’m on the road this week and am away from my main computer. I took a quick look at your instance and cannot see right off-hand what the root problem is. I will continue to look at this on Monday.


    • pmendoza December 16, 2011 at 11:57 pm

      I understand, thank you for taking the time in the first place! Have you been able to test the update set with Aspen yet?

  3. brumiou January 6, 2012 at 2:02 am


    I am trying to export a report in CSV to an FTP server.
    The file is created, but contains an Apache error:
    This request requires HTTP authentication ().

    Have you any idea where it comes from?

    thanks a lot!

  4. brumiou January 25, 2012 at 8:24 am


    I still have the issue. I checked and, as for pmendoza, I don’t have the SNFilePoster in the MID server script includes.

    I added it and updated the SNDataRetriever, but it doesn’t change anything.

    If the issue was solved for pmendoza, maybe you can explain how you did it? Or what the solution was?

    Thanks a lot!


    • brumiou January 25, 2012 at 8:47 am

      Sorry to bother you again…

      I think I discovered something…
      first of all I forgot to tell you that we are on premise, so we are not hosted by service-now.
      In the logs, I can see :
      Probe: JavascriptProbe:SNDataRetriever *** Script: report: https://scrdev01.service-now.com/sys_report_template.do?jvar_report_id=51822452ff8210002098dfffffffff95&CSV
      host: https://scrdev01.service-now.com/

      but the address is not correct!

      Do you know where I can change this?
      it’s not https but http, and it isn’t zzzz.service-now.com either.

      Thanks for your time



      • Jacob Andersen January 26, 2012 at 10:15 am

        You can modify how the instance name is defined by going to the sendDataExtractProbe business rule. The 3rd line in the script contains this code:

        var host = gs.getProperty("glide.servlet.uri");

        You can change it to whatever suits your environment. Good luck!
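For on-premise installs like brumiou’s, one option is to fall back to a custom property when overriding that line. This is only a sketch, and the `x.data_extract.host_override` property name is an invented example, not part of the update set.

```javascript
// Sketch: resolve the probe host, preferring a hypothetical override
// property for on-premise installs where glide.servlet.uri points at
// the wrong address. The override property name is an assumption.
function resolveHost(getProperty) {
  return getProperty('x.data_extract.host_override') ||
         getProperty('glide.servlet.uri');
}

// Example with a stubbed property lookup standing in for gs.getProperty:
var props = {
  'glide.servlet.uri': 'https://scrdev01.service-now.com/',
  'x.data_extract.host_override': 'http://scrdev01.solaris.nbb:16001/'
};
console.log(resolveHost(function (k) { return props[k]; }));
// http://scrdev01.solaris.nbb:16001/
```

Hosted instances that never set the override keep the default `glide.servlet.uri` behavior.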

        • brumiou January 27, 2012 at 12:54 am


          thanks for the quick reply!

          I looked at the system property glide.servlet.uri, and it was already set correctly (http://scrdev01.solaris.nbb:16001/). So it seems that the SNDataRetriever uses another property?
          Maybe based on the instance name, and then a hardcoded value somewhere?

          So for your last question, yes, I still have the problem 🙁

    • Jacob Andersen January 26, 2012 at 10:16 am

      Do you still have this problem? From your latest post, it looks like you got past that issue. Thanks!

  5. Mark Stanger February 6, 2012 at 10:36 am

    Due to some ServiceNow limitations, the localhost MID server option had to be removed.

  6. Vineeth March 2, 2012 at 5:04 am

    I am using the scheduled extract to upload the sys_audit file to an FTP server. Many times the file which is uploaded is a 0-byte file with no data. Sometimes it runs successfully. Is it related to the fact that sys_audit contains a lot of data?

    • Mark Stanger March 2, 2012 at 5:21 am

      Yes, most likely. It’s not a good idea to try to extract the entire sys_audit table. sys_audit is really designed just to show the history of a record within that record. Due to its size, it’s going to be very slow and inefficient when accessed like this…to the point that it’s possible to bring down an entire instance by doing so. The answer I would give to this question is that it just shouldn’t be done. Instead, create a trend job, metric definition, or business rule to capture the specific, necessary information about incidents and store it in a smaller table that can be accessed (and exported) without bringing your whole instance down.

  7. Vineeth March 2, 2012 at 7:31 am

    I missed mentioning one piece of information. The condition is Created Yesterday. And this is the error seen in the Input Queue of the Scheduled Data Extract, i.e. Method failed:
    Initializing SNDataRetriever
    In Mid
    Running SNDataRetriever execute
    Arriving to getReport function
    Method failed: HTTP/1.1 202 Accepted
    Releasing Connection

    • Michele July 20, 2012 at 10:03 am

      Hi, did you figure out why you were getting this or figure out how to get around it?

      We are having the same issue when trying to export 5,000 to 50,000 records from the sys_history_line table. We are not getting this issue with any of the other data exports including the incident, user and task_sla table. The export is being run at row count of 50,000 for incident, user and task_sla.

      The sys_history_line export was being run at 50,000 until we started having issues. Then we set the export to 5,000 rows and that still didn’t work. If we restart the MID Servers (clustered in payload balancing) the sys_history_line works once or twice at 50,000 but then stops working with the above http/1.1 202 Accepted error.


  8. brumiou March 5, 2012 at 7:18 am


    it works very well for us now (I had to hardcode the host name in the sendDataExtractProbe business rule, because gs.getProperty("glide.servlet.uri") returns https://[instancename].service-now.com, and as we are on premise, it’s not .service-now.com).

    I just have a little question.
    Is it possible to put the execution on On Demand and launch it via a Perl script?
    We would like to schedule the creation of the file by another scheduling tool.

    Thanks a lot



  9. Russ Hart March 26, 2012 at 8:20 am

    Please can you help Mark – getting this error..

    Initializing SNDataRetriever
    Running SNDataRetriever execute
    Arriving to getReport function
    Method failed: HTTP/1.1 403 Forbidden
    Exception caught in SNDataRetriever->getReport: Stream closed
    Releasing Connection
    The report was empty?

  10. Niklas May 31, 2012 at 4:41 am

    Hi, I’ve tried to use the update set on an Aspen release instance. When I try to set up Scheduled Data Extract according to “Scenario #1:”, some fields are missing. It’s not possible to schedule because the “Run” and “Time” fields are missing. The button to execute is also missing.

    Anyone recognize this issue?


    • Mark Stanger May 31, 2012 at 4:47 am

      I haven’t heard of anyone else having that issue. You should probably re-download the update set and try to apply it again. If the fields and button still aren’t there then you should try to personalize the form to bring the fields forward.

      • Niklas May 31, 2012 at 6:34 am


        I’ve tried downloading again and applied it to demo01.service-now.com successfully.

        When I try to do the same in my instance, the module is named “U scheduled data extract” instead of “Scheduled Data Extract” as in the demo instance. I’m also missing some relations, as I can see in the schema map: “Scheduled Script Execution”, “Scheduled Job”, and “User” are missing.


  11. Sarah June 15, 2012 at 10:13 am

    This worked like a charm in my DEV environment, but when I commit the update set to my TEST environment, several fields are missing on the Scheduled Data Extract form (Run, Time, mostly the ‘schedule’ fields), which I see are not on this table but rather on the sysauto table. What might have caused this?


    • Mark Stanger June 15, 2012 at 10:19 am

      There are some issues with update sets and table extensions in some versions of ServiceNow that caused issues with the application of previous versions of the scheduled data extract. There’s a new version available that corrects these issues. The new version just came out a few days ago. Can you try downloading and applying that update set and see if that solves your problem?

      • Sarah June 15, 2012 at 10:56 am

        Unfortunately that didn’t work either. I committed the 3.1 version and there are fields still missing. The original version was 2.6 if that helps?

  12. Chris York July 2, 2012 at 10:35 am

    You mention in the description that this does not work with SFTP Protocol. Can you elaborate on why? I have a external party that ONLY supports SFTP and I am trying to get a scheduled data extract to them nightly. Is there any way I can enhance this to include the support of SFTP?

    • Jacob Andersen July 2, 2012 at 10:48 am


      The latest version posted in the downloads section does support SFTP. I just updated it about a month ago and will need to update this article to reflect that change.



  13. Jeremy Mandle July 3, 2012 at 8:50 am

    Working as expected, save for the fact that only a 1 KB .csv is being written into the directory I specify on one of our MID servers.
    Any insights?

    • Mark Stanger July 5, 2012 at 10:56 am

      Hey Jeremy,
      The only suggestion I have is to make sure that you’re running the latest version of the Scheduled Data Extract. This has recently been updated so you could try to reapply the update set to your instance and see if that fixes anything.

      • Jeremy Mandle July 23, 2012 at 7:14 am

        Thanks for the suggestion Mark. I’ll let you know if this works.

  14. Steven Bell July 24, 2012 at 11:41 am

    Hey Jeremy:

    I just downloaded and installed the latest version (3.1). You seem to indicate that this is possible: when I implemented your solution it worked great, but it stops cold at 50,000 records. I rigged up reports that grab things in <50K record chunks and then created scheduled reports for each, but it should be possible to pull it as one large file.

    How can I get past the 50k max record pull?



    • Jacob Andersen July 24, 2012 at 1:26 pm


      The max number of records that can be processed is controlled by a property and defaults to 50k. The property is com.glide.processors.XMLProcessor.max_record_count. This property was put in place so that people didn’t kill the performance of their instances by exporting huge tables. Please understand the side effects before modifying this property.
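Rather than raising the cap, one workaround is to split the export into windows that each stay under it. This sketch only computes the offset/limit windows; `exportWindows` is an illustrative helper, not code from the update set.

```javascript
// Sketch: split a large export into offset/limit windows that each stay
// under the instance's record-count cap (default 50,000). Each window
// would then drive one scheduled export run.
function exportWindows(totalRecords, maxPerExport) {
  var windows = [];
  for (var offset = 0; offset < totalRecords; offset += maxPerExport) {
    windows.push({
      offset: offset,
      limit: Math.min(maxPerExport, totalRecords - offset)
    });
  }
  return windows;
}

// A 120,000-row table with the default 50k cap needs three exports:
console.log(JSON.stringify(exportWindows(120000, 50000)));
// [{"offset":0,"limit":50000},{"offset":50000,"limit":50000},{"offset":100000,"limit":20000}]
```

This keeps each query small instead of asking the instance for one giant result set.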


      • Steven Bell July 24, 2012 at 2:04 pm


        Yes, we understand the impact. This would be done only once-in-awhile, and only in off hours.

        BTW, I tried this setting before and it had no effect. I just attempted it again and am still getting only 50K records back.

        Here is what I did:

        1. In our DEV instance went to System Definition / Properties
        2. The property did not exist. Following the suggestion from this article: http://wiki.servicenow.com/index.php?title=Fall_2009_Notable_Changes
        (section 1.24) I created just the one property.
        3. I clicked the new button.
        a. For name I entered: com.glide.processors.XMLProcessor.max_record_count
        b. For type: integer
        c. For Value: 100000
        d. I left all other settings as default, and clicked Insert.
        4. I then went to Reports / CrossFuze Data Extract, brought up my scheduled job (in this case a report of the entire cmdb_ci table), and clicked Execute Now. My target is the default in my MIDServer (C:\ServiceNow\MID Server1\agent\work)
        5. After the job ran, I get a single cmdb_ci.csv file (it comes down quick), of 39Mb of data, but only 50K records.

        Any thought on why the setting does not bring down the first 100K records instead of 50K? Is this the right location for the property?


  15. Michele July 27, 2012 at 6:40 am

    We are still getting the HTTP 1.1/202 Accepted error but only when trying to export data from the sys_history_line table. It is happening whether we try to pull 100 rows or 50,000 rows. I have also tried to limit the query to only a small period of time (like 15 minutes) but that does not change the issue. I have tried adding a retry and timer to the SNDataRetriever job with no change in the issue. I have tried different MID Server configurations (cluster-payload, cluster-failover, single MID server) with no change in the issue. The job also orders by sys_id so that we can grab the first 50,000 rows, then the next 50,000 which works fine for the incident and user tables.

    This job runs fine in DEV where there are less than 50,000 records in the sys_history_line table but in TEST there are over 2 million rows (which is where this is failing consistently). I’m also starting to wonder if the table rotation is corrupted or something.

    • Michele July 27, 2012 at 6:43 am

      More information: Transport method is set to FTPS but I have also tested with the transport option set to MID Server (to eliminate the FTP piece of the puzzle).

      • Jacob Andersen July 27, 2012 at 7:06 am

        I’m not sure why you are seeing that, but it is possible that the sys_history_line table doesn’t allow processor export. Regardless, I would highly discourage exporting that table using this solution.

        • Michele August 31, 2012 at 7:52 am

          I completely understand Jacob, we have been encouraging our client to do something different like ODBC but that is not possible yet for them. However, I have been able to do the export using the sys_created_on field (which has been indexed) and limiting the query to a hour or two of data at a time.
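Michele’s workaround above, walking the table an hour or two of sys_created_on at a time, can be sketched as a window generator (again an illustrative helper, not part of the update set):

```javascript
// Sketch: generate hourly [start, end) windows over a day so each export
// query touches only a slice of a huge table like sys_history_line,
// filtered on an indexed field such as sys_created_on.
function hourlyWindows(dayStartIso, hours) {
  var start = new Date(dayStartIso);
  var windows = [];
  for (var i = 0; i < hours; i++) {
    var from = new Date(start.getTime() + i * 3600000);
    var to = new Date(start.getTime() + (i + 1) * 3600000);
    windows.push([from.toISOString(), to.toISOString()]);
  }
  return windows;
}

// A full day split into 24 one-hour export queries:
console.log(hourlyWindows('2012-08-31T00:00:00Z', 24).length); // 24
```

Each pair would become the bounds of one export’s date filter, so no single query has to scan millions of rows.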

  16. Michele August 2, 2012 at 2:58 pm

    SFTP (Password) transport is giving us an error in our response payload. I don’t see anywhere to post the file and don’t want to include it all in the post as it is long so have provided an extract of it below. Any ideas?

    I re-copied the SNDataRetriever Script Include from the most recent version of your update set back into the instance I’m working in as well.

    Evaluation error: Caused by error in MID Server script include 'SNDataRetriever' at line 146

    143: sftpSuccess=false;

    144: }

    145: authState = new Packages.com.sshtools.j2ssh.authentication.AuthenticationProtocolState();

    ==> 146: if(ssh.authenticate(pk)==authState.COMPLETE){

    147: this.log('SSH REMOTE AUTHENTICATE==TRUE');

    148: var sftp = ssh.openSftpClient();

    149: try{

    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    java.lang.reflect.Method.invoke(Unknown Source)


  17. David Hale August 31, 2012 at 7:12 am

    Is the Scheduled Data Extract (MID Server plugin) working with ServiceNow’s MSP implementation?

    • Mark Stanger August 31, 2012 at 7:34 am

      @David, I’m not aware of any reason that it wouldn’t work in an MSP implementation.

    • Jacob Andersen August 31, 2012 at 8:03 am

      Yes, it should work with MSP implementations.

  18. Mike P September 20, 2012 at 2:30 pm

    If I choose an entire table I have no problems and get the header row and all the data. If I choose a report, I get the CSV file but with only the header row and no data rows.

    My instances are running on version June 11, Patch 4. I have Scheduled Data Extract version 2.1 installed. I did update SNDataRetriever to get the user id and password on the mid-server (to fix the June 11 problem).

    I looked at the Data Extract History and it had no errors or messages. If I run the report from View/Run, it does produce 2 records. I updated our Test environment to the current version, 3.1 (Crossfuze), hoping that might fix the problem, but I am still having the same problem. After I updated, I created a new extract in the Crossfuze module. I renamed the old SNDataRetriever to avoid conflicts. I am still only getting the header row and no data. And still no errors show up in the Data Extract History for the extract. I copied the URL being passed to the MID server and pasted it in my browser, and it brings up the report correctly, showing 2 records in the list at the bottom of the page.

    I’m out of ideas and I need to create multiple extracts for a few different reports ASAP. Anyone have any similar situations or ideas?

    Thanks in advance

  19. Jkosmich October 2, 2012 at 12:46 pm

    Hey Mark / Jacob:
    At present I am running Berlin (glide-berlin-05-06-2012__patch1-08-17-2012), and the update set installed without any incident!

    Is there a reason that when a Report choice is selected, the “XML” option is omitted? I have a need to send out a Report (constructed of a LIST page of data which is pre-filtered) into a target system that requires XML formatted data.

    Is there some fundamental reason that choice was omitted? I have at present identified the UI Policy that controls this function and could easily patch it but wanted to know of any side effects of performing this heinous act upon my instance.

  20. Jim October 9, 2012 at 9:28 am

    I had an error after uploading the latest update set. I am sure it was specific to my instance, but I wanted to share what I found.
    When I attempted to export to the MID server, I got the error below. Since I am not using FTP, I checked the SNDataRetriever and found the evaluation statement: if (this.transportMethod == "MID Server").
    When I looked at my transport method choices, "Mid Server" was not enabled AND the case was not correct. I updated my choice list to use "MID Server" instead and it worked perfectly.

    Evaluation error: Caused by error in MID Server script include 'SNDataRetriever' at line 109

    106: pt.setProperty("connection.timeout", "10000");
    107: pt.setProperty("connection.passive", "true");
    ==> 109: var connection = Packages.org.ftp4che.FTPConnectionFactory.getInstance(pt);
    111: var fromFile = new Packages.org.ftp4che.util.ftpfile.FTPFile(this.MIDSERVER_FILE_PATH, this.MIDSERVER_FILE_NAME);
    112: var toFile = new Packages.org.ftp4che.util.ftpfile.FTPFile(this.targetPath, this.targetFileName);
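A defensive version of that comparison would tolerate the case mismatch Jim describes; this is only a sketch, not the update set’s actual code:

```javascript
// Sketch: normalize the transport-method choice before comparing, so a
// "Mid Server" choice-list value still matches the expected "MID Server".
function isMidServerTransport(transportMethod) {
  return String(transportMethod || '').trim().toUpperCase() === 'MID SERVER';
}

console.log(isMidServerTransport('Mid Server')); // true
console.log(isMidServerTransport('FTPS'));       // false
```

Normalizing on one side of the comparison means a later edit to the choice list’s capitalization can’t silently break the branch.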

  21. Kumar Tella January 3, 2013 at 7:12 am

    This data extract works for us for exporting report data through the MID server, except for one report so far. I have a report created on the incident metric table that retrieves the records updated yesterday (about 4500 records only), but the extract in CSV format to the MID server returns a blank CSV file. Why? The other reports, created on the incident and problem tables, are working just fine. Any help would be appreciated.

  22. Amado Sierra January 25, 2013 at 12:25 pm

    Great job on this.
    One question: how did you manage to extend a system table (sysauto_script)?
    I’m working on a new feature for one of the applications and would like to extend the sysauto_script table, but ServiceNow does not allow it.

  23. Pascal February 4, 2013 at 9:53 am

    Great update set! Is there a way to export all rows of the table to a .csv? Currently it just takes the default settings of sys_user.list, which includes only about 5 columns.

    • Mark Stanger February 4, 2013 at 10:36 am

      CSV and XLS export columns are based off of the view selected. You can personalize a view for the export that contains all columns if you want. You can also set up a list report to extract that lists all of the columns you want.

      • Pascal February 5, 2013 at 12:33 am

        Thank you, I think I’ll go with the list report approach.

  24. Dan March 4, 2013 at 11:40 am

    Is there a way to change the file permissions on the file created on the mid server at all?

    • Dan March 4, 2013 at 12:54 pm

      So, I figured it out finally, I used the following in the SNDataRetriever script:

      var perms = new Packages.java.io.File(tmpLoc);
      perms.setWritable(true, false);

      So you can call arbitrary Java packages within ServiceNow.

  25. Benjamin MALARTIC March 5, 2013 at 8:40 am

    Hi Mark, Jacob,

    Thanks for this tool 🙂

    I saw a little thing: the default value of the u_transport_method field is “MID Server” whereas the value in the choice list is “Mid Server”. Your UI policies are not working by default (the Target Server is still mandatory if you do not change the value of u_transport_method).

    • Mark Stanger March 5, 2013 at 9:44 am

      Thanks Benjamin! This is actually fixed in the latest version of the Scheduled Data Extract, and we’ve added a ton of enhancements as well. If you’re interested in the full product, you can contact us at Crossfuze Solutions for details.

Comments are closed.