Greetings to those of you from HackerNews: Show HN 🙂
Here is how it works:
Send a single URL and we visually compare it to the last time you sent us that URL.
Send two URLs and we visually compare them to each other.
If the URLs are web pages, we generate a screenshot to compare against your prior screenshot. Screenshots are sourced via URL2PNG (we really, really like url2png for rendering consistency). But if the URLs are already images, we use those directly instead.
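The page-vs-image decision above can be sketched in a few lines. This is an illustrative assumption about how dispatch might work, not the service's actual code; the extension list and function name are made up for the example.

```python
from urllib.parse import urlparse

# Hypothetical set of extensions treated as "already an image".
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def needs_screenshot(url):
    """Decide whether a URL must be rendered to a screenshot first.

    Image URLs are compared directly; anything else is treated as a
    web page and rendered to a screenshot (via url2png, per the post).
    """
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in IMAGE_EXTENSIONS)

assert needs_screenshot("https://example.com/pricing")        # page: render it
assert not needs_screenshot("https://example.com/before.png")  # image: use as-is
```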
This way we can watch any public page, or private pages generated through Travis/continuous integration. We can also watch things that aren’t pages, like mobile/desktop apps or the results of process-generated media.
Our broad goal is to bring change-aware remembrance agents into existence in a major way, starting with the websites and images we “visit” every day, or those “visited” by agents on our behalf.
Browsers have recently started using this visual approach for bookmarks and start pages, but really this should be pervasive and ingrained in our expectations as consumers.
Here is how it works in practice:
Each account has its own queue and a number of dedicated workers. Upon accepting a job, we write an initial result file and 302-redirect you to it. Once a worker completes the job, that final URL contains all the job results and links to visual assets. All jobs are served out of a fastly.com CDN. Events for an account are streamed out over a dedicated pusher.com channel.
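The submit/redirect/poll lifecycle above can be sketched as client logic. Everything here is a simulation under stated assumptions: the paths (`/jobs`), field names (`status`, `assets`), and status values are invented for illustration, and a fake in-process "server" stands in for the real service so the sketch is self-contained.

```python
import itertools

def fake_server():
    """Simulate the service: accepting a job immediately writes an
    initial result and 302-redirects to it; the result stays
    'pending' until a worker finishes the job."""
    states = itertools.chain(["pending", "pending"], itertools.repeat("complete"))

    def fetch(method, url):
        if method == "POST" and url == "/jobs":
            # Initial result file exists right away; client is redirected to it.
            return 302, {"Location": "/jobs/42"}, None
        if method == "GET" and url == "/jobs/42":
            status = next(states)
            body = {"status": status}
            if status == "complete":
                # Final result links to the visual assets.
                body["assets"] = ["/jobs/42/before.png",
                                  "/jobs/42/diff.png",
                                  "/jobs/42/after.png"]
            return 200, {}, body
        return 404, {}, None

    return fetch

def run_job(fetch):
    """Submit a job, follow the 302, and poll the result URL until done."""
    code, headers, _ = fetch("POST", "/jobs")
    assert code == 302
    result_url = headers["Location"]
    while True:
        _, _, body = fetch("GET", result_url)
        if body["status"] == "complete":
            return body

result = run_job(fake_server())
# result["status"] is "complete"; result["assets"] lists before/diff/after.
```

In a real client you would replace `fetch` with an HTTP library and subscribe to the pusher.com channel instead of polling in a tight loop.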
We handle scaling and resizing annoyances, and use a tweaked perceptual-diff method to display only the most interesting changes. Those changes are served up as a single composition image or as a montage image (before, diff, after). Plus we include thumbnails to make your dashboard/tooling work quickly.
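A toy version of that diff step: normalize two grayscale "screenshots" to a common size, then keep only pixel changes above a threshold. The service's actual tweaked perceptual diff is certainly more sophisticated; this sketch just shows the shape of the problem (scaling annoyances plus thresholded differences), using plain 2D lists of 0–255 values so it needs no imaging library.

```python
def resize(img, w, h):
    """Nearest-neighbor resize of a 2D grid of 0-255 gray values."""
    sh, sw = len(img), len(img[0])
    return [[img[y * sh // h][x * sw // w] for x in range(w)]
            for y in range(h)]

def diff_mask(before, after, threshold=16):
    """1 where a change is 'interesting', 0 where it is below threshold.

    Both images are first resized to their common (smaller) dimensions,
    so inputs of different sizes can still be compared.
    """
    h = min(len(before), len(after))
    w = min(len(before[0]), len(after[0]))
    a, b = resize(before, w, h), resize(after, w, h)
    return [[1 if abs(a[y][x] - b[y][x]) > threshold else 0
             for x in range(w)] for y in range(h)]

before = [[10, 10], [10, 10]]
after = [[12, 10, 10], [10, 200, 10], [10, 10, 10]]  # bigger, one real change
mask = diff_mask(before, after)
# The tiny 12-vs-10 fluctuation is ignored; only the 200-vs-10 change survives.
```

The montage (before, diff, after) would then be composed from `before`, the mask, and `after`.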
In fact, our optional dashboard is an all-client-side React app that consumes JSON from the API. It is meant to show the power of the raw service and to be easy to use, but we fully expect custom dashboards to appear.
Here is why we are doing this:
During our initial build-out we spent months watching the Alexa top 1000. We injected the service into CMS and commerce systems, so that they became self-documenting and a source of change awareness. Think archive.org in real time, under your control.
There are a lot of services starting to enter this space, and that is really, really exciting. We like our approach for the flexibility and ease of use. We look forward to seeing how others go after the challenge. This, or something like it, is our future.
So, in closing…
When you remember everything and use agents to learn anything, when you can act upon the slightest change, with a history of observed contextual changes… what would you do better? What would you watch? With whom would you share?
We intend to help you find out. Be change aware. 🙂