The following is an excerpt from Chris Noessel's new book, Designing Agentive Technology, published by Rosenfeld Media.
One of the fun things we get to consider when dealing with artificial intelligence is that in order to enable it to carry out its seeing, thinking, and doing duties, we must include cutting-edge technologies in the system. Trying to list these authoritatively is something of a fool's errand, because by the time the book is published, some will have already fallen out of use or become unremarkable, and new ones will have appeared. Nor would I pretend to have collected a complete list. But by understanding them in terms of seeing, thinking, and doing, we can more quickly understand their purpose for an agent, and thereby for the user. We can also begin to think in terms of these building blocks when designing agentive technologies—to have them in our backpack—and we gain a frame for contextualizing future technologies as they become available. So, at the risk of providing too cursory a list, I've built the following based on existing APIs, notably IBM's Watson and a bit of Microsoft's Cognitive Services.
The agent needs to be able to sense everything it needs in order to perform its job at least as well as the user, and in many cases, in ways the user can’t sense. While many of these sensing technologies seem simple and unremarkable for a human, teaching a computer to do these things is a remarkable achievement in and of itself, and very useful to equip agents to do their jobs.
Much of this list of sensing technologies feels intuitive, like what a human might call "direct." For instance, you can observe a transcript of what I just told my phone, and point to the keywords by which it understood that I wanted it to set a 9-minute timer. In fact, it's not direct at all; it's a horribly complicated ordeal to get to that transcript, but it feels so easy to us that we think of it as direct.
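To make the "point to the keywords" step concrete, here is a minimal sketch of how a system might map a finished transcript to a timer intent. This is my own invented illustration, not how any real assistant works; the actual hard part, turning audio into the transcript in the first place, is the "horribly complicated ordeal" the paragraph describes.

```python
import re

def parse_timer_intent(transcript: str):
    """Toy keyword matcher: spot 'timer' plus a duration
    like '9-minute' in an already-transcribed utterance."""
    match = re.search(r"(\d+)[- ]minute", transcript.lower())
    if "timer" in transcript.lower() and match:
        return {"intent": "set_timer", "minutes": int(match.group(1))}
    return None

print(parse_timer_intent("Set a 9-minute timer"))
# {'intent': 'set_timer', 'minutes': 9}
```

The point of the sketch is how shallow this final step is compared to the speech recognition that precedes it, which is why the whole pipeline only *feels* direct.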
But vastly more data can be inferred from direct data. For instance, most people balk at the notion that the government has access to actual recordings of their telephone conversations, but balk much less at access to their phone's metadata, that is, the numbers that were called, the order in which they were called, and how long the conversations lasted.
Yet in 2016, John C. Mitchell and Jonathan Mayer of Stanford University published a study titled "Evaluating the privacy properties of telephone metadata." In it, they wrote that narrow AI software, analyzing test subjects' phone metadata with some smart heuristics, was able to determine deeply personal things about them, such as that one subject was likely suffering from cardiac arrhythmia, and that another owned a semi-automatic rifle. The personal can be inferred from the impersonal.
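The flavor of such a heuristic can be sketched in a few lines: sensitive facts fall out of *who* was called, not what was said. The lookup table, numbers, and call records below are entirely invented for illustration and have nothing to do with the study's actual methods or data.

```python
# Hypothetical mapping from called numbers to sensitive categories.
KNOWN_NUMBERS = {
    "555-0132": "cardiology clinic",
    "555-0188": "firearms dealer",
}

def infer_interests(call_log):
    """Collect a sensitive category for each recognized number
    in a metadata-only call log (no call content needed)."""
    inferences = set()
    for call in call_log:
        category = KNOWN_NUMBERS.get(call["number"])
        if category:
            inferences.add(category)
    return inferences

calls = [
    {"number": "555-0132", "seconds": 340},
    {"number": "555-0199", "seconds": 60},  # unrecognized: nothing inferred
]
print(infer_interests(calls))
# {'cardiology clinic'}
```

Even this toy version shows why metadata is less innocuous than it sounds: a directory of phone numbers plus a call log is enough to start profiling.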
Similarly, it's fairly common practice for websites to watch what you're doing, where you've come from, and what they know you've done in the past in order to sort users into demographic and psychographic segments. If you've liked a company in the past and visit their site straight from an advertising link that's gone viral, you're in a different bucket than the person who arrives at their page after visiting Consumer Reports, and the site adjusts itself accordingly.
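A minimal sketch of that bucketing logic might look like the following. The segment names and rules are invented for illustration; real segmentation uses far richer signals than a referrer string and one past action.

```python
def segment_visitor(referrer: str, liked_before: bool) -> str:
    """Toy rule: same page, different bucket, depending on
    where the visitor came from and their known history."""
    if "consumerreports.org" in referrer:
        return "comparison-shopper"
    if liked_before:
        return "returning-fan"
    return "general"

print(segment_visitor("https://www.consumerreports.org/reviews", False))
# comparison-shopper
print(segment_visitor("https://social.example/viral-ad", True))
# returning-fan
```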
These two examples show that in addition to whatever data we could get from direct sensing technologies, we can expect much, much more data from inference engines.
Though largely the domain of artificial intelligence engineering, it's interesting to know what goes into the sophisticated processing of artificial intelligence. To a lesser extent, these details can inform the design of such systems, although collaborating with the developers actually building the agentive system is the best way to understand real-world capabilities and constraints.
In the next chapters, we'll look at a collection of use cases to consider when designing agentive technology. Please note that I tried to be comprehensive, which means there are a lot. But your agent may only need a few, or even none. Consider one of my favorite examples, the Garden Defense Electronic Owl. It has a switch to turn it on, and thereafter it turns its scary owl face toward detected motion and hoots. That's all it does, and all it needs to do. If you're building something that simple, you won't need to study setup patterns or worry about how it might hand its responsibilities off to a human collaborator.
Simpler agents may involve a handful of these patterns, and highly sophisticated, mission-critical agents may involve all of them and more. It is up to you to understand which of these use cases apply to your particular agent.