In email exchanges with me regarding Report #157 and its eel-looking evidence, some viewers have raised a puzzled observation: the same rover image shows both subtle image tampering and the eel-looking object left in plain sight. In other words, why would tampering be present and yet the eel-looking object be left out of the tampering process to be seen? To some this goes against reason and raises the question of whether the eel's presence is intentional. That, in turn, raises the question of why secrecy types would do such a thing.
This line of thinking suggests some unknown secrecy tactic at work, but the explanation is simple and straightforward. From my perspective, image tampering is routine in the planetary science data. After examining many tens of thousands of images, I tend to take it for granted, quickly tune it out, and move on without reporting on the bulk of it. For example, if the tampering successfully hides something and that's the end of it, I just move on. You have to know when to hold 'em and when to fold 'em and not waste time on lost causes. If I didn't do that, my reporting would be dominated by such tampering evidence, and you would be bored to death with it, as I long ago came to be. I report on it only when I think it is important for viewers to understand.
Here is what must be understood about image tampering. When a mission returns many thousands of images that must all, by law, be released publicly within a specified time frame, applying tampering by hand via human personnel is simply not practical. The rush to meet public release deadlines would be unacceptably mistake-prone. Further, the process would involve too many people, and that many people could not be relied upon to keep what they had observed under wraps indefinitely. On the other hand, when mission legalities and timetables allow the secrecy types to take their time and cherry-pick which images to release while withholding others, human intervention can prevail as long as it is kept to a minimum.
When a mission goes to Mars to conduct a massive photographic survey of the planet's surface, as opposed to imaging selected camera targets, it will of course produce images in the many tens of thousands and more. A mission that fits that description very well is the MGS MOC mission to Mars. Any tampering done to data from such a mission must be done at very high speed, far beyond human capability, and the only thing that logically fits that requirement is computerization. However, it must be computerization capable of taking over the work and learning as it goes, constantly updated via ongoing human programming. These factors are essential to the obfuscation goals and their success. In other words, the need dictates the function.
This is where cutting-edge artificial intelligence (AI) computerization comes in. After its initial basic programming, the AI can gradually learn what its masters are trying to accomplish, then modify itself and run with the task alone, accomplishing in a tiny fraction of the time what a human can do. Remember that there is no printing process here; it all happens within the digital world of mathematical algorithms, where mechanical limitations do not apply. However, the concept of the AI learning carries within it the concept of making mistakes to learn from, so mistakes are part of this process as well.
First the tampering AI processes the image by mapping every object within it. Very basically, it then decides, based on its programming and learned criteria, what is objectionable and what isn't. Those criteria may cover not just offending objects like aircraft, buildings, surface water, and trees, but also natural geological terrain that might otherwise give viewers a real clue as to size scale. Why? Because manipulating perceived camera resolution is a primary base obfuscation tactic.
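The screening pass described above can be illustrated with a toy sketch. To be clear, this is purely a hypothetical illustration of the decision logic being claimed, not a real system; all category names here are my own placeholders.

```python
# Toy sketch of the screening pass described above.
# All categories and names are hypothetical placeholders, not a real system.

OBJECTIONABLE = {"aircraft", "building", "surface water", "tree"}
SCALE_CUES = {"boulder field", "dune ripples"}  # terrain that hints at true size scale

def screen_image(detected_objects):
    """Split mapped objects into those to obfuscate and those to leave alone."""
    flagged, untouched = [], []
    for obj in detected_objects:
        if obj in OBJECTIONABLE or obj in SCALE_CUES:
            flagged.append(obj)       # known shape: gets tampered
        else:
            untouched.append(obj)     # unrecognized shape passes through untouched
    return flagged, untouched

flagged, untouched = screen_image(["building", "dune ripples", "eel-shaped object"])
```

Note that anything not already in the database simply passes through, which is the mechanism the rest of this discussion turns on.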
In my opinion, tampering is often initially done at very high resolutions that are never admitted to; the whole is then scaled back to a poor resolution and desaturated of color, and that poor result is what is released to you and me, as well as to the science and academic communities and of course the media. Since this obfuscation tactic has been applied to public releases since the beginnings of space exploration, and the poor quality is all we've ever seen, it becomes our standard of reference and all that we expect from the science data. If you are a secrecy type, that is a good thing. Ignorance and its promotion is always a good thing to secrecy, because it keeps population demands and intrusiveness out of what they consider their business.
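The "degrade before release" step claimed above amounts to two simple image operations: cutting resolution and removing color. Here is a minimal pure-Python sketch on a nested-list image (a real pipeline would use image-processing libraries); the data and functions are illustrative assumptions only.

```python
# Minimal sketch of the degrade-before-release step described above:
# average 2x2 pixel blocks to halve resolution, then average RGB channels
# to desaturate. Pure Python on a nested list of (R, G, B) tuples.

def downsample_2x(img):
    """Average each 2x2 block of RGB pixels into one pixel."""
    out = []
    for y in range(0, len(img) - 1, 2):
        row = []
        for x in range(0, len(img[0]) - 1, 2):
            block = [img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1]]
            row.append(tuple(sum(p[c] for p in block) // 4 for c in range(3)))
        out.append(row)
    return out

def desaturate(img):
    """Replace each RGB pixel with its grey average."""
    return [[tuple([sum(p) // 3] * 3) for p in row] for row in img]

hi_res = [[(200, 100, 0)] * 4 for _ in range(4)]  # 4x4 orange test patch
released = desaturate(downsample_2x(hi_res))      # 2x2 and grey
```

The point of the sketch is only that both operations are cheap, one-way, and leave no obvious trace in the released product.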
Remember that this is a visual process, even if it is broken down into mathematical algorithms, and object recognition is the name of the game. When you think about it, objects that indicate the presence of life are all around us here on Earth and, in most cases, are visually more plentiful than the underlying geology itself. These are things we are very familiar with; we don't think about them, yet we quickly recognize them as life. So many billions of object shapes must be fed into the AI's database for it to adequately recognize all such objects as objectionable and worthy of covering up. As you might expect, once you get to thinking about it, that's a lot of objects, and it is nearly impossible to get the initial programming right and comprehensive.
So the tampering process with AIs is a function of initial programming followed by constantly adding objects and shapes to the object recognition software, tweaking the AI's database to increase its effectiveness. That is part of the "learning" process. Some objects are added because they were simply forgotten in the first programming forays; others are added when some obstinate researcher comes along and discovers one of the "mistakes," which is then embarrassingly present and verifiable right in the official science data, fixed in time and place where it can no longer be altered.
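The update loop just described, where each discovered "mistake" is patched into the recognition database for future releases, can be sketched in a few lines. Again, this is a hypothetical illustration of the claimed logic, nothing more.

```python
# Hedged sketch of the "learning" loop described above: the recognition
# database grows whenever a missed shape turns up in released data.
# Entirely hypothetical; this illustrates only the update logic.

shape_db = {"aircraft", "building"}  # the initial programming

def find_misses(objects, db):
    """Return objects the current database fails to flag -- the 'mistakes'."""
    return [obj for obj in objects if obj not in db]

missed = find_misses(["building", "eel shape"], shape_db)
shape_db.update(missed)  # patch the database so future releases catch these
```

Each pass through this loop shrinks the set of shapes that can slip through, which is exactly the clean-up effect described in the next paragraph.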
As I began to understand this process early on while working in the MGS MOC data, I also began to realize that what I publicly revealed likely guaranteed that such "mistakes" or discoveries would not appear in future data releases. In other words, by my reporting I was telegraphing the mistakes as I went, actually helping the secrecy agenda clean up its act by fixing them in data not yet released, as well as in data from missions to come. Not good!
So, in that early MGS MOC research, I started holding back some of my best evidence discoveries rather than publicly revealing them, to try to inhibit this. It has worked partially, but then they aren't stupid and can figure out on their own what shapes to hide. However, this countermeasure has resulted in some very strong surface water and bio-life evidence from the MGS MOC data being held back; it will appear in the upcoming book. You'll see and be able to judge for yourself when it comes out shortly.
In any case, what must be understood is that the tampering is, overall, very effective. It obfuscates by far the great bulk of the anomalous evidence you and I would be interested in. What I discover and report on is simply where a rare "mistake" has been made here and there in the process of obfuscation, or I should say in failing to adequately obfuscate. So the evidence I reveal tends to come in random bits and pieces and represents only a tiny, myopic view of what is likely the total Mars truth. Therefore, it would be a mistake on our part to extrapolate too much from these bits and pieces of evidence, except in that they often represent the more basic concepts of water and life on Mars that, by official position, aren't supposed to be there at all.
Meanwhile, back to the eel-looking evidence in my Report #157 and how both this object and image tampering could exist in the same image. It is very likely that the AI simply didn't have this eel-looking shape in its vast database, and so it left it alone while making applications on adjacent evidence that was in its database. Remember, the AI isn't a human being capable of a lot of imaginative extrapolation. It was the same with the hollow rocks showing evidence of the passage of something in and out of holes in them. This image also contains other suggestive evidence that I did not report on initially, like the mask, the smooth polished material, and the possibility of a second eel.
I've also had email feedback suggesting that a mixture of life and artificial-object evidence like this demonstrates widespread destruction on the planet's surface. However, this is not something that can be adequately extrapolated from this limited rover visual evidence. I realize that the lonely, empty, distant vistas in the rover and Phoenix imaging tend to encourage this view. However, if you can accept that any of this visual data has been manipulated, then it is logical that these distant views would also be manipulated. This becomes even more likely when you realize that distant views are also the easiest to digitally manipulate.
For all we know, the rover in the case of the eel could be examining a waste area near a large city or industrial operation, one subject to periodic automatic flushing of wastes via wastewater that soon sinks into the soil before the next flush. Such a scenario would of course explain the presence of a menagerie of artificial objects (garbage, mask, etc.) and life (eels) mixed together as flushed debris. In other words, devastation is not the only possible explanation, nor is it even the most likely.
The need is always there to be open and objective in one's assessment of evidence, rather than indulging in favored theories that too often lead to prejudice, which in turn leads to perception filters and/or blindness.