  • How much error is involved in projection?
  • How many projection cycles are "safe"?
  • Which software packages have the lowest error propagation rate?
  • Are the answers different for raster and vector data?
  • For example, the native storage format of one of our data providers is UTM. However, they only distribute data in geographic decimal degrees, so they project it before sending it to us. Of course, when we receive the data, the first thing we do is reproject it into Albers to match the rest of our data.

    So before we do any analysis at all, the data has been through two projections and three different coordinate systems. Chances are that when our upstream provider was compiling the data, they started with data from an existing source, which was likely in a different coordinate system again.

    We do something with the data which results in a new theme. Other people are interested in the data so we project to a popular coordinate system for distribution, which is probably decimal degrees again. The downstream users' first item on the agenda will likely be to project to match their favourite projection. And on we go.

    Over the course of time this cycle will get repeated many times by many people with various software. At what point, if ever, should we be concerned?

    I posted this message to a variety of discussion forums. The results will be summarised and posted here. Should you wish to throw your 2 bits in, you can do so in any of those forums, or here on this page (see the Edit link at the bottom of the page). -- MattWilkie - 19 Mar 2004

    Other places this discussion might be happening:



    Matt... this is actually something that I've given considerable thought to (more questioning than searching for answers) - What is "right"? What makes something "correct"? What's true vs. untrue? It's just led to a general understanding that nothing is really right (a somewhat nihilistic view, I suppose). The real point isn't how accurate this layer or that one is; it's how accurate they are in relation to each other. Notionally, you could tie this to the real world, but at best this is difficult. Considering that everything is wrong to one degree or another, the essential question for me is: is what I just produced an "accurate" depiction of the real-world phenomenon that I've been told to analyze? Of course, this leads to projections and what they do and mean.

    For me, it is always a matter of how the "transformed data" relates to the base data - e.g. is the representation we are producing actually any less "true" due to what we've done to it? The answer revolves around several points: a) the precision of what you are mapping, of course. For us in the wildlife field, where at best our precision will be in the metres, the transformation probably doesn't alter the "image" significantly. Thus, marginal errors are tolerable. If you are dealing with survey-grade data, then it becomes an issue. (...*For surveyors*...) b) the scale of your presentation [for mapping] c) does the error affect analysis? d) scale jamming (precision of 1:1 million is unimportant if you are comparing it to 1:10k data) e) yada yada


    I will say that Bill Huber's message made my head swim. So with that math limitation in mind, here's what I have to say:


    The actual mathematical error between forward and backward transformations (from one projection to another) of points is very low, and is most likely more dependent on the number of decimal places that are carried. This is something I have actually tested: the error in a DD-->UTM-->DD transformation was around 0.0001 (at 69° this is about 0.6 m in the x direction - this was done with a calculation that I created [remember that bit about how math hurts my head sometimes!? that should be a bit of a warning] - I would assume that Arc has a more efficient method and a lower error). Using this value, we could sustain a sizable number of reprojections (10? 15?) before the data begins to impact the "validity" of what we produce (even with regard to processed analysis).
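    The forward/backward round-trip claim above is easy to probe in spirit. The sketch below is not Todd's calculation or any vendor's engine: it uses a closed-form spherical Mercator projection as a stand-in for the DD-->UTM-->DD cycle (the test point at 69° N is made up) and measures coordinate drift after repeated cycles in double precision:

```python
import math

R = 6378137.0  # sphere radius in metres (WGS84 semi-major axis)

def forward(lon_deg, lat_deg):
    """Spherical Mercator forward projection: degrees -> projected metres."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

def inverse(x, y):
    """Spherical Mercator inverse projection: projected metres -> degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(2 * math.atan(math.exp(y / R)) - math.pi / 2)
    return lon, lat

lon, lat = -135.0, 69.0   # an arbitrary point at 69 N
for _ in range(20):       # twenty full projection cycles
    lon, lat = inverse(*forward(lon, lat))

drift_lon = abs(lon - -135.0)
drift_lat = abs(lat - 69.0)
print(drift_lon, drift_lat)  # both stay many orders of magnitude below 0.0001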

    Conclusion: I know this is rehashing a lot of what's on the ESRI thread, but in a dumbed-down manner - the real answer is that since we produce "maps" on NTS sheets with error in the hundreds of metres, and satellite imagery with error in the ten-metre range, a metre here and a metre there on our part doesn't really mean much at this point in time. The final fact that determines how accurate we have to be is that the majority of maps/analyses we produce are small scale (1:50k would be the lowest), so the "accuracy" isn't all that important. More important is what I would call intuitive relativeness - if it looks and feels right, then it's probably not wrong, and that's usually good enough (similar to the duck identification key).


    As Bill Huber said in the ESRI thread, raster images are a different issue altogether. With vector data, it's just applying a formula to convert a point to different coordinates - theoretically, applying the formula the other way should take you back to the original numbers. When you reproject a raster, you have a choice - you can either preserve the data (nearest neighbour) or preserve the spatial nature (cubic convolution). So even after one reprojection with cubic convolution, you cannot return to the "original" data. A good example of this is when you geocorrect an image - when you apply a [polynomial] transformation, the DNs at the new coordinates are created by interpolating from the cells around the old coordinates. If you were to apply a backwards transformation, you would see numbers that approximate the originals but aren't exactly the same, because of the interpolation. I believe the same is true with nearest neighbour [I would have to think some on it before I say for sure] - I don't think you can go backwards to the original data, since you've got to sacrifice something - some cell values are probably discarded because they have neighbours that fall closer to the transformed centroid.
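    The nearest-neighbour point can be shown with a toy example (a hypothetical one-dimensional "raster" of six cells, not any ESRI routine): shift the cells by a sub-cell amount, then shift back, and compare to the original.

```python
def resample_nn(values, shift):
    """1-D nearest-neighbour resample: output cell i takes the value of
    the input cell nearest to position i + shift (clamped at the edges)."""
    n = len(values)
    out = []
    for i in range(n):
        src = round(i + shift)
        src = min(max(src, 0), n - 1)
        out.append(values[src])
    return out

original = [10, 20, 30, 40, 50, 60]
shifted = resample_nn(original, 0.6)    # forward "reprojection"
back = resample_nn(shifted, -0.6)       # attempt to undo it
print(back)  # the value 10 is gone and 20 is duplicated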

    Cheers, Slacker

    -- ToddSlack - 22 Mar 2004




    a response by email:


    MapInfo uses integer math and is therefore unreliable. The latest release of ESRI software, as I understand from ESRI reports, is geodetically correct in most cases (not all).

    You should be fine (with ESRI software) for Alaskan applications, not so for the Oblique Mercator used for the island of Madagascar - a quite different projection from that used for the Panhandle of Alaska Zone 1.

    For further info on ESRI and this topic, the staff geodesist is Ms. Melita Kennedy at - please say "hello" for me.


    Clifford J. Mugnier (
    Chief of Geodesy and Associate Director,
    CENTER FOR GEOINFORMATICS, Department of Civil Engineering
    Voice and Facsimile: (225) 578-8536
    Pager: 1-(888) 365-5180






    From: Steve Bugo (ESRI Canada)
    Sent: March 16, 2004 12:32 PM
    Subject: Incident #53911 - Cyclic projecting and error propagation

    Hi Matt,

    The questions you are asking are not the types of questions we usually deal with in Technical Support because they are not specifically related to any ESRI software. However I have tried to put together as much information as I could to try and answer your questions.

    The types of questions you are asking about projections are not easily answered. The answers would require detailed knowledge of many factors, including the precision of the original data (the level of error it was created with), the type of projection used on the data, and the projection engine and algorithms used to project it. I have attempted to answer your questions below, but you must realize that there are no fast answers. There are too many variables involved, which makes it very difficult to calculate error levels in projections and how many projection cycles are safe. I have also included a number of links to articles, some of which you may have already seen. Hopefully they will help you determine what steps to take for your projection requirements.

    How much error is involved in projection? This is a very subjective question, because depending on the projection you use for a dataset, and where on the globe the data is located, the amount of error can vary significantly. The error for certain localized projection types increases as you move away from the central location where that projection is most accurate, to the point where it is no longer useful. You must also take into consideration what you are trying to preserve (area, distance, direction, shape). These factors, among others, combined with the original level of error in the data when it was created, make it very difficult to determine how much error is involved in a projection.

    An article that discusses some of the complexities of error with regard to scale and elevation distortion is '"Modified to Ground" Mapping Projections'. Another document that deals with the complexities of calculating projections is "Map Projections & Coordinate Systems".


    How many projection cycles are "safe"? Again, this is very subjective and virtually impossible to calculate, depending on all of the factors listed above. If you could control every single cycle right from the creation of the data, then you might be able to calculate some type of error for the resultant data after each cycle, but this will almost never happen. Therefore there is no set number of projection cycles that can be assumed to be safe. Because you cannot always control the state of the data you receive, it is impossible to know the level of error after you in turn project, unproject, or reproject the data.


    Which software packages have the lowest error propagation rate? This is entirely reliant on the type of projection engine being used and what types of algorithms it employs. The projection engine used by ESRI software is a modified version of the European Petroleum Survey Group model. I cannot comment on the accuracy of other software vendors, but the link below and subsequent links within will give you an explanation of the projection methods employed by ESRI.

    FAQ: What is the Projection Engine?


    Are the answers different for raster and vector data? There are many factors involved in accurately projecting raster data, but it mainly depends on the compression scheme used to compress the images. Projecting and unprojecting raster data through multiple iterations introduces larger and larger errors into the data. It is always best to keep the original raster on hand to use as a base. If you project the data to the desired coordinate system before applying compression, you will minimize the loss of data quality. Below are some good links to articles on what you should keep in mind for rasters.

    FAQ: Can on-the-fly projection of a raster be avoided by physically reprojecting the data into a different coordinate system?

    Index: Technical articles concerning projecting raster data in ArcGIS


    General Information HowTo: Projection Basics: What the GIS professional needs to know

    FAQ: Where can I find more information about coordinate systems, map projections, and datums?

    ArcGIS Desktop Help > Contents tab > Map projections:

    • Geographic coordinate systems
    • Projected coordinate systems
    • Geographic transformations
    • Supported map projections

    Hopefully this answers some of your questions in part and gives you enough information that you can adequately research the rest for yourself.

    Thanks for the call,

    Steven Bugo - ESRI Canada Ltd.
    Interim Manager, Client Support
    Developer Resource Consultant
    Email : Web:
    Technical Support : GTA~416.441.0337 Toll Free~1.877.441.0337

    29/03/2004 11:51 AM

    Hi Steve,

    > The questions you are asking are not the types of questions we usually deal with in Technical Support because they are not specifically related to any ESRI software.

    This is incorrect. The questions are not restricted to ESRI software, but they are definitely specifically related to it.

    > However I have tried to put together as much information as I could
    > to try and answer your questions.

    Thank you. I am going through the info you provided.


    > The types of questions you are inquiring about regarding projections are not easily answered. The answers would require a detailed knowledge about many factors including the precision of the original data (level of error that it was created with), the type of projection used on the data, the projection engine and algorithms used to project the data.

    Okay, I see now that I generalized too much. How about we bring it down to a specific set of scenarios with known variables.

    Scenario 1:

    Source data starts its journey in UTM NAD83, with coordinates stored in a double-precision ArcInfo coverage. It is projected to a Lat-Long NAD83 shapefile. The feature class is polygons. Nominal scale is 1:50,000.

    Next step is to convert the shapefile to a double precision coverage and project to Albers.

    Next step is to project to geographic decimal degrees and convert the coverage to a shapefile.

    The final step in this round of the cycle is to convert the shapefile back into a UTM NAD83 double-precision coverage.

    ArcInfo Workstation 8.3 is used for all steps.


  • Is there any drift in the coordinates for individual nodes? To what decimal place?
  • After three cycles? Five? Ten? Twenty? One hundred?
  • Is there any difference if the data is single precision?
  • How about if the intermediate format is a geodatabase?
  • What if ArcMap/Catalog are used instead of Arc: command prompt?
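    The single-precision question above can be bounded without running any projection at all, because the storage format alone moves coordinates. This sketch (the UTM coordinate values are made up) round-trips typical easting/northing magnitudes through 32-bit storage:

```python
import struct

def to_single(x):
    """Round-trip a float through 32-bit single-precision storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Hypothetical UTM coordinates, in metres:
easting, northing = 500123.456, 6654321.987

err_e = to_single(easting) - easting
err_n = to_single(northing) - northing
# Near 6.6 million, adjacent single-precision values are 0.5 m apart,
# so a northing can shift by up to 0.25 m just by being stored once.
print(err_e, err_n)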
    Scenario 2:

    As above, but data source is a continuous grid, a DEM, with 15m cells. Resampling mechanism is always bilinear.


  • As above, plus:
  • What if the resampling mechanism is cubic?
  • What if the cell size is larger? Smaller?
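    For the bilinear case in Scenario 2, one round trip already smooths the data, because each output cell is a weighted average of its neighbours. A minimal one-dimensional sketch (the elevation values are made up) shifts a row of cells by half a cell and back:

```python
def resample_linear(values, shift):
    """1-D linear resample: output cell i interpolates the input
    at position i + shift (clamped at the edges)."""
    n = len(values)
    out = []
    for i in range(n):
        p = min(max(i + shift, 0.0), n - 1.0)
        j = int(p)
        f = p - j
        right = values[min(j + 1, n - 1)]
        out.append(values[j] * (1 - f) + right * f)
    return out

dem = [100.0, 120.0, 180.0, 160.0, 110.0, 90.0]
back = resample_linear(resample_linear(dem, 0.5), -0.5)
print(back)  # the peak at 180 has been flattened and cannot be recovered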
    Scenario 3:

    As #2, except grid is classified data. Some classes can be a single cell in extent. Resampling is always nearest neighbour.


  • As with #2.
    thanks for your time,

    -- matt wilkie

    _Note: A follow-up response has not yet been received._ Chris North from ESRI Canada phoned me this afternoon. He is following up with Redlands.



    Hello Matt, OK, apologies for the delay; we are in the middle of testing for our forthcoming release of FME 2004 ICE and things are a little hectic. Not to mention that I'm not personally an absolute expert in this field, however here goes...

    I've canvassed a lot of opinion and gotten a lot of different answers. I think a lot of what you are asking must depend on the application you are using the data for.

    Also, what are you going to measure accuracy against? If you measure accuracy against the original dataset, then your data is never going to be MORE accurate than it was; the only way is down when you are doing data reprojections.

    If you are measuring this in terms of datum, then sure a shift to a more relevant datum (say NAD83 instead of WGS84) is going to give you more appropriate results, but still the accuracy of the data will be less than you started with. With datum shifts the most accurate method of conversion is a grid-based method. I believe the Canadian system (NTv2) is a multiple-grid method, so your accuracy is going to vary depending where your data is - if it falls within a higher-density area it will be more accurate.

    Projections are less of an issue than datum shifts, because there are well-defined parameters that specify each projection and rigorous mathematical formulae that convert spherical (lat/long) coordinates to a flat projection. I believe that there will still be accuracy issues, and that these are more obvious when changing the 'type' of projection. For example, if you had a Zenithal (planar) projection and merely reprojected the perspective from Gnomonic to Orthographic, then you should be able to project backwards and forwards without loss of accuracy. Convert planar to conic and I am less certain.

    The extent to which degradation of accuracy occurs is related to precision. If linear scale (for example) is compressed by reprojecting, then reversing the process will not necessarily give you the same precision as you had before. And because scale distortion isn't always constant (even an equidistant projection is only equidistant ON the meridians), the degree of error will vary; it is worse the further you are from the true meridian. This is where the software package matters, as different packages have different ideas of precision and will round numbers in different ways. The less the precision, the more your data will suffer. Do they use single or double precision? If you are reprojecting or datum shifting, you definitely want double precision to keep whatever accuracy you started with. Also, make sure that your software is carrying out the full process. I'm told that some applications carry out an on-the-fly reprojection of data (usually raster) which, for purposes of speed, produces only an approximate result.

    I think the main source of error in reprojections is human error, and I include here a lack of understanding about software and how it functions. It's all too easy to get something wrong when you are reprojecting data, especially when you need to select the correct projection, spheroid and datum and don't know the difference. This is when it is important to keep metadata showing the source data spec and the processes it has undergone to reach the stage it has; that way any problems can be traced back to a particular action or process. If a user converts to the wrong datum, doesn't notice and doesn't leave a paper-trail then the effort in investigating can be considerable.

    Your data providers probably use UTM because this is the best projection for edge matching data. There might be other reasons, but the fact that they only provide data in decimal degrees shows to me that they are thinking about potential sources of human error and are trying to reduce them by being consistent.

    And as to your question, at what point should you be concerned? At every point! I think it would be foolish to undertake ANY transformation without carefully thinking about the reasons why you are doing it. The fewer processes data is subjected to, the less chance there is of introducing error of any sort.


    Well, I hope this helps. As I said it depends on what you are using your data for; how accurate does it need to be and is it going to be compared with data that has come from a different process? How large an area is it and where is the data in relation to the standard parallels and meridians?

    I've made this fairly generic; if you have specific questions about how FME functions then let me know and I will find out the info from our developers. Regards,


    Mark Ireland, Product Support Engineer
    Safe Software Inc. Surrey, BC, CANADA
    phone: (604) 501-9985 fax: (604) 501-9965
    Solutions for Spatial Data Translation, Distribution and Access

    Hi Matt,

    >> Yeah. I think I cast the net too wide.
    >> From the responses so far it seems that vector data (re)projecting is
    >> relatively error free -- providing the appropriate software and
    >> parameters are chosen. There is no substitute for domain knowledge and
    >> experience. I would still like to know how many cycles a given piece
    >> of data can go through before the shift is significant, i.e. becomes
    >> larger than the accuracy of the data itself.
    >> I was hoping somebody would get back to me with something like
    >> "taking dataset XXX and using program YYY with parameters ZZZ after
    >> NNN projection cycles error becomes significant; coordinates shift
    >> up 111 units".

    Yes, and I would've liked to have given you that. But it really does depend on the size of your data, the projection chosen, where in the world you are and where your data is in relation to the true meridians.

    If you go from lat/long to a TM projection, then the further you are from the central meridian(s) the more scale factor is applied, so the deformation of the data isn't constant. You would know the maximum change, and this might be enough.
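    That growth in scale factor can be quantified with the spherical transverse Mercator formula k = k0 / sqrt(1 - (cos(lat)*sin(dlon))^2), where dlon is the offset from the central meridian. This is a sketch using the spherical approximation (the ellipsoidal formula differs slightly; the latitude and offsets are illustrative):

```python
import math

def tm_scale_factor(lat_deg, dlon_deg, k0=0.9996):
    """Point scale factor of a spherical transverse Mercator projection,
    dlon_deg measured in degrees from the central meridian."""
    b = math.cos(math.radians(lat_deg)) * math.sin(math.radians(dlon_deg))
    return k0 / math.sqrt(1.0 - b * b)

# Scale distortion at 69 N, moving away from a UTM-style central meridian:
for dlon in (0, 1.5, 3, 6, 12):
    print(dlon, round(tm_scale_factor(69.0, dlon), 6))
```

The distortion grows roughly with the square of the distance from the central meridian, which is why UTM zones are kept to 6 degrees wide.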

    If you then convert this to a non-conformal projection you'll start to get angular or shape deformation; how such deformations accumulate and interact I wouldn't like to speculate. I think you would have to take the approach that each mix of transformations produces unique shifts and calculate a maximum error on that basis.

    I couldn't really find much literature on map re-projections - most books simply stick to the properties of individual projections. Ones I can recommend are the USGS "Album of Map Projections" (USGS Professional Paper 1453), which illustrates individual projections and the Tissot indicatrix for each, and "Map Projections Used by the USGS" (USGS Bulletin 1532), which has more of the mathematics and formulas, if that is of interest.

    As for raster data, well we stick mostly to vector data at Safe, so I have little experience of this. From anecdotal evidence I've heard it is indeed less accurate than vector when reprojected.


    Mark Ireland, Product Support Engineer
    Safe Software Inc. Surrey, BC, CANADA




