US20120200743A1 - System to augment a visual data stream based on a combination of geographical and visual information - Google Patents


Info

Publication number
US20120200743A1
US20120200743A1 (application US13/023,463)
Authority
US
United States
Prior art keywords
information
mobile computing
video
video stream
augment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/023,463
Other versions
US8488011B2 (en
Inventor
Sean Mark Blanchflower
Michael Richard Lynch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Autonomy Corp Ltd
Hewlett Packard Development Co LP
Original Assignee
Autonomy Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13/023,463 priority Critical patent/US8488011B2/en
Application filed by Autonomy Corp Ltd filed Critical Autonomy Corp Ltd
Assigned to ANTONOMY CORPORATION LTD reassignment ANTONOMY CORPORATION LTD ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BLANCHFLOWER, SEAN MARK, LYNCH, MICHAEL RICHARD
Priority to PCT/US2012/024063 priority patent/WO2012109182A1/en
Priority to EP12744744.9A priority patent/EP2673766B1/en
Priority to CN201280008162.9A priority patent/CN103635954B/en
Publication of US20120200743A1 publication Critical patent/US20120200743A1/en
Assigned to LONGSAND LIMITED reassignment LONGSAND LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AUTONOMY CORPORATION LIMITED
Priority to US13/940,069 priority patent/US8953054B2/en
Publication of US8488011B2 publication Critical patent/US8488011B2/en
Application granted granted Critical
Assigned to AURASMA LIMITED reassignment AURASMA LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LONGSAND LIMITED
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AURASMA LIMITED
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254 Management at additional data server, e.g. shopping server, rights management server
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4524 Management of client data or end-user data involving the geographical location of the client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6581 Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Definitions

  • Embodiments of the present invention generally relate to the field of digital image processing, and in some embodiments, specifically relate to inserting information messages into videos.
  • video capturing devices are available in the market today at very affordable prices, allowing many consumers to capture video on any occasion, at any place and at any time.
  • the content of the captured video is limited to what is visible to the operator of the video capture device. For example, when the operator is videotaping a building because of its unique architecture, what the operator sees in a viewfinder or on a display of the video capturing device are images of the same building and nothing more.
  • a mobile computing device may be configured to enable a user to capture a video stream and view the video stream after the video stream has been augmented in real time.
  • the mobile computing device equipped with a global positioning system (GPS)
  • a server computer coupled with the mobile computing device may be configured to receive visual information about points of interest in the video stream and the geographical information from the mobile computing device.
  • the server computer then identifies augment information and transmits the augment information to the mobile computing device.
  • the augment information may be used to augment the captured video stream to create an augmented video stream, which may be viewed by the user on a display screen of the mobile computing device.
  • FIG. 1 illustrates one example of a mobile computing device that may be used, in accordance with some embodiments.
  • FIG. 2 illustrates an example of a network that may be used to augment a captured video stream, in accordance with some embodiments.
  • FIG. 3A illustrates an example of a server computer that may be used to determine augment information for use with a captured video stream, in accordance with some embodiments.
  • FIG. 3B illustrates an example of user profile information, in accordance with some embodiments.
  • FIG. 4 illustrates an example of a network diagram with mirrored servers that may be used to filter information received from the mobile computing devices, in accordance with some embodiments.
  • FIG. 5 illustrates an example flow diagram of a process that may execute on a mobile computing device to create an augmented video stream, in accordance with some embodiments.
  • FIG. 6A illustrates an example flow diagram of a process that may execute on a server computer to determine augment information, in accordance with some embodiments.
  • FIG. 6B illustrates an example flow diagram of a process that may execute on a server computer to determine augment information based on user profile, in accordance with some embodiments.
  • FIG. 6C illustrates an example flow diagram of a process that may be used to determine distance based on the chirp signals generated by the mobile computing devices, in accordance with some embodiments.
  • FIG. 7 illustrates an example block diagram of some modules of an IDOL server, in accordance with some embodiments.
  • FIG. 8 illustrates an example computer system that may be used to implement an augmented video stream, in accordance with some embodiments.
  • a mobile computing device is configured to augment video streams with augment information received from a server computer connected to a network.
  • the mobile computing system includes a processor, a memory, a built in battery to power the mobile computing device, a built-in video camera, a display screen, and built-in Wi-Fi circuitry to wirelessly communicate with the server computer.
  • the mobile computing device includes a video capturing module coupled with the processor and configured to capture a video stream, a global positioning system (GPS) module coupled with the video capturing module and configured to generate geographical information associated with frames of the video stream to be captured by the video capturing module.
  • the mobile computing device also includes a video processing module coupled with the video capturing module and configured to analyze the frames of the video stream and extract features of points of interest included in the frames.
  • the video processing module is also configured to cause transmission of the features of the points of interest and the geographical information to the server computer and to receive the augment information from the server computer.
  • the video processing module is configured to 1) overlay, 2) highlight, or 3) both overlay and highlight the points of interest in the frames of the video stream with the augment information to generate an augmented video stream.
  • the augmented video stream is then displayed on a display screen of the mobile computing device.
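The round trip described in the preceding bullets (capture a frame, extract features and GPS data, transmit, receive augment information) can be sketched as a per-frame packet. This is an illustrative sketch only; the class and field names below are assumptions for illustration, not structures defined in the patent.

```python
from dataclasses import dataclass, field

# Hypothetical per-frame packet combining extracted point-of-interest
# features with GPS and compass readings, as the patent's overview describes.
# All names here are illustrative assumptions.

@dataclass
class PointOfInterest:
    shape_coords: list   # X-Y coordinate pattern of the geometric shape
    color: str           # dominant color associated with the shape

@dataclass
class FramePacket:
    latitude: float
    longitude: float
    heading: str                              # e.g. "NW", from the direction sensor
    points_of_interest: list = field(default_factory=list)

# Build one packet for a captured frame.
packet = FramePacket(latitude=37.77, longitude=-122.42, heading="NW")
packet.points_of_interest.append(
    PointOfInterest(shape_coords=[(0, 0), (4, 0), (4, 6), (0, 6)], color="red")
)
```

The packet deliberately carries only compact text-like data (coordinates, color labels, GPS numbers) rather than pixels, matching the size-minimization goal discussed later in the description.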
  • an algorithm may be written in a number of different software programming languages such as C, C++, Java, or other similar languages. Also, an algorithm may be implemented with lines of code in software, configured logic gates in hardware, or a combination of both. In an embodiment, the logic consists of electronic circuits that follow the rules of Boolean logic, software that contains patterns of instructions, or a combination of both.
  • the present invention also relates to apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled with a computer system bus.
  • Embodiments of the present invention provide a scalable way of combining two or more data sources including using the visual information to trigger augmentations and the geographical location to allow advanced augmentation of the captured video stream.
  • Information presented by video streams is typically limited to what is visible or audible to the users, such as geometric shapes, the color patterns associated with those shapes, symbols, and other features associated with objects in that video stream. There may be much more in-depth information associated with the scenes in the video streams that is not conveyed to the user.
  • the use of visual information or characteristics information about points of interest or objects alone to augment a video stream may be useful but may not be sufficient or scalable when the volume of visual information or characteristics information is large.
  • the use of geographical information alone may not permit the augmentation of specific objects or views of the scenes in the video stream.
  • the geographical information may allow a rapid recognition or matching to the characteristics of objects that are known and pre-stored in an object database.
  • the geographical information may be provided by a global positioning system (GPS).
  • Combining the visual information with the geographical information may reduce the amount of possible points of interest that need to be sorted through by a server computer to identify and recognize known objects and/or persons.
  • the rough geographical information from the GPS reduces the amount of possible points of interest that need to be sorted through as a possible match to known objects in that area.
  • direction information about where a video camera of the mobile computing device is facing when capturing the video stream is also transmitted to the server computer.
  • the direction information may be provided by a built-in compass or direction sensor in the mobile computing device to the server computer along with the features of the points of interest in that frame. All of these assist in reducing the sheer number of potential views that must be compared against the characteristics information transmitted from the mobile computing device and the known objects stored in a database, making for a scalable and manageable system.
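A minimal sketch of the pruning step just described: rough GPS position plus compass heading cut down the set of known candidate objects before any visual comparison runs. The function names, the 1 km radius, the 60° field of view, the heading-in-degrees convention, and the flat-earth distance approximation are all illustrative assumptions rather than the patent's actual method.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate bearing from the camera to a candidate object, in degrees."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(y, x)) % 360

def candidate_objects(known, cam_lat, cam_lon, cam_heading, radius_km=1.0, fov_deg=60):
    """Keep only known objects within the GPS radius and the camera's field of view."""
    out = []
    for obj in known:
        # crude equirectangular distance, adequate for a ~1 km radius
        dx = (obj["lon"] - cam_lon) * 111.32 * math.cos(math.radians(cam_lat))
        dy = (obj["lat"] - cam_lat) * 111.32
        if math.hypot(dx, dy) > radius_km:
            continue  # too far away: never a match candidate
        diff = abs((bearing_deg(cam_lat, cam_lon, obj["lat"], obj["lon"])
                    - cam_heading + 180) % 360 - 180)
        if diff <= fov_deg / 2:
            out.append(obj["name"])  # inside the camera's view cone
    return out
```

Only the surviving candidates would then be compared against the transmitted visual features, which is what makes the matching step scalable.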
  • FIG. 1 illustrates one example of a mobile computing device that may be used, in accordance with some embodiments.
  • Mobile computing device 100 may include display module 105 , communication module 110 , global positioning system (GPS) module 115 , video capturing module 120 , processor 125 , and memory 130 .
  • the mobile computing device 100 may be, for example, a cellular phone, a laptop, a netbook, a touch pad, or any other similar devices.
  • the mobile computing device 100 cooperates with the network 200 (see FIG. 2 ) to supply augment information to points of interest captured in the frames of a video stream in the mobile computing device 100 based on a combination of geographical and visual information.
  • the mobile computing device 100 includes video processing module 135 on the mobile computing device 100 to assist in the identification of objects captured in each video frame as well as then insert the augment information into the frames of the video stream.
  • the communication module 110 may be used to allow the mobile computing device 100 to be connected to a network such as, for example, the network 200 (see FIG. 2 ).
  • the communication module 110 may be configured to enable the mobile computing device 100 to connect to the network 200 using wireless communication protocol or any other suitable communication protocols.
  • the communication module 110 may include a wireless fidelity (Wi-Fi) module 111 , a Bluetooth module 112 , a broadband module 113 , a short message service (SMS) module 114 , and so on.
  • the communication module 110 may be configured to transmit visual information associated with a video stream from the mobile computing device 100 to one or more server computers connected to the network 200 .
  • the GPS module 115 may be used to enable the user to get directions from one location to another location.
  • the GPS module 115 may also be used to enable generating the geographical information and associating the geographical information with images and frames of video streams. This process is typically referred to as geotagging.
  • the geographical information may be inserted into one or more of the frames of the video stream.
  • the geographical information may be inserted and stored with images, video streams, and text messages generated by the mobile computing device 100 .
  • the geographical information may be stored as metadata, and may include latitude and longitude coordinates.
  • the server system for the tagging and augmentation of geographically-specific locations can determine which objects appear in a video stream by using the latitude and longitude coordinates associated with or stored with an image of, for example, a building, together with other distinctive features of that building.
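The lookup just described can be sketched as a geotag query against a database of known locations, so that only nearby candidates need visual comparison. The haversine formula is standard; the function name, database shape, and 200 m threshold are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_buildings(frame_geotag, building_db, max_m=200):
    """Return names of known buildings within max_m of the frame's geotag."""
    lat, lon = frame_geotag
    return [name for name, (blat, blon) in building_db.items()
            if haversine_m(lat, lon, blat, blon) <= max_m]
```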
  • the video capturing module 120 may be configured to capture images or video streams.
  • the video capturing module 120 may be associated with a video camera 121 and may enable a user to capture the images and/or the video streams.
  • the video capturing module 120 may be associated with a direction sensor 122 to sense the direction that the video camera 121 is pointing to.
  • the video camera 121 may be a built-in video camera.
  • the display module 105 may be configured to display the images and/or the video streams captured by the video capturing module 120 .
  • the display module 105 may be configured to display the images and/or the video streams that have been augmented with the augment information stored in a database in the network.
  • the display module 105 may be associated with a display screen 106 .
  • the memory 130 may include internal memory and expansion memory.
  • the internal memory may include read-only memory (ROM) and random access memory (RAM), and the expansion memory may include flash memory.
  • the memory 130 may be used to store an operating system (OS) and various other applications including, for example, productivity applications, entertainment applications, communication applications, image and/or video processing applications, user interface applications, etc.
  • the processor 125 may be configured to execute instructions associated with the OS, network browsers, and the various applications.
  • Some examples of the OS may include Android from Google, iOS from Apple, Windows Phone from Microsoft, and WebOS from Palm/HP, and so on.
  • the network browsers may be used by the mobile computing device 100 to allow the user to access websites using the network 200 .
  • the mobile computing device 100 may include a video processing module 135 configured to process images and/or video streams captured by the video capturing module 120 .
  • the video processing module 135 may analyze the frames of the captured video stream and identify the objects/points of interest within each frame of the captured video stream. Identifying the points of interest for an object may include breaking the object into geometric shapes and distinctive features. The operations may apply to a set of objects with each object in the set broken down into different geometric shapes and associated distinctive features.
  • the video processing module 135 may use an extraction algorithm to identify the features of the points of interest in a frame, extract those features along with the geographical information and other relevant information, and transmit that packet of information about the frame up to the server computer (see FIG. 3A ), for each frame being captured by the video camera 121 .
  • the video processing module 135 may generate a pattern of X-Y coordinates of the geometric shapes of the point of interest and the color associated with the shapes.
  • the video processing module 135 may extract the direction information from a compass or direction sensor 122 associated with the video camera 121 to determine the direction that the video camera 121 is facing when capturing the frames in the video stream.
  • the direction information provided by the direction sensor 122 may include north, south, east, west, up, down, and any possible related combinations (e.g., Northwest and up 20 degrees from a horizontal plane, etc.).
  • the pattern of points used for each point of interest, the number of points used, and the number of points of interest may depend on the number of distinct points of interest in the frame. Non-centered or peripheral objects in the frame, small objects, and non-distinctive objects can be filtered out by the extraction algorithm, while only bold and distinctive features of the points of interest are extracted.
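A sketch of the filtering rule just described: peripheral, small, or non-distinctive candidate shapes are dropped so that only bold, central features are transmitted. The thresholds and the candidate-record fields are illustrative assumptions, not values from the patent.

```python
def filter_features(candidates, frame_w, frame_h,
                    min_area=500, min_contrast=0.4, center_frac=0.6):
    """Keep candidate shapes that are big, high-contrast, and near the frame center."""
    # bounding box of the central region of the frame
    cx_lo, cx_hi = frame_w * (1 - center_frac) / 2, frame_w * (1 + center_frac) / 2
    cy_lo, cy_hi = frame_h * (1 - center_frac) / 2, frame_h * (1 + center_frac) / 2
    kept = []
    for c in candidates:
        x, y = c["center"]
        if not (cx_lo <= x <= cx_hi and cy_lo <= y <= cy_hi):
            continue  # periphery: filtered out
        if c["area"] < min_area or c["contrast"] < min_contrast:
            continue  # small or non-distinctive: filtered out
        kept.append(c)
    return kept
```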
  • the video processing module 135 may analyze each captured frame of the video stream.
  • the video processing module 135 may relate patterns from the series of frames to assist in determining what the points/objects of interest are.
  • the video processing module 135 may relate patterns from the series of frames to enable faster transmission of the features of the points of interest. For some embodiments, no transmission of the features from a particular frame may be necessary if there is no change to the same features that were previously transmitted. For some embodiments, if a current frame includes features that are different from the previous frame, only the difference in the change of features is transmitted.
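The delta-transmission idea in the bullet above can be sketched as a simple dictionary diff: nothing is sent when a frame's features are unchanged, and only new or changed entries are sent otherwise. The dict-of-features representation is an assumption for illustration.

```python
def frame_delta(prev_features, curr_features):
    """Return only the features that are new or changed since the previous frame."""
    return {fid: feat for fid, feat in curr_features.items()
            if prev_features.get(fid) != feat}

# Hypothetical feature sets for two consecutive frames.
prev = {"roof": [(0, 0), (4, 3)], "door": [(1, 0), (2, 2)]}
curr = {"roof": [(0, 0), (4, 3)],          # unchanged: not retransmitted
        "door": [(1, 1), (2, 3)],          # moved: retransmitted
        "sign": [(5, 5)]}                  # new: transmitted
```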
  • the objects/points of interest may generally be located in the center area of the frames. It may be noted that certain consecutive frames of the captured video stream may have the same object in the center area or at least contained within the series of consecutive frames.
  • the video processing module 135 may analyze these frames to identify the characteristics or visual information of the object. As the video capturing module 120 continues to capture the video stream, it may be possible that the video processing module 135 will identify many different objects.
  • the video processing module 135 may perform basic scene analysis, including using optical character recognition (OCR) to extract the distinctive features of the points of interest within the frames of the captured video stream and code them into a small pattern of X-Y coordinates in a geometric shape format, with associated distinctive color and pattern information for each feature.
  • the video processing module 135 may identify the geographical information of that object and other known distinctive features for that object.
  • the information transmitted by the mobile computing device 100 to the server computer may be in the form of text.
  • the above operations performed by the video processing module 135 can be used to minimize the size of the file being transmitted to the server and to hasten both the near real time recognition by the server of the points of interest and the near real time transmission of the augment information to the mobile computing device 100 .
  • the video processing module 135 identifies and extracts distinctive features including shapes, dot-to-dot type X-Y coordinates of the shapes, patterns, colors, letters, numbers, symbols, etc. associated with objects/points of interest in the video frame to minimize the size of the file being transmitted to the server computer and hasten the near real time recognition by the server computer of the points of interest and the near real time transmission of the augment information to the mobile computing device 100 .
  • the augment information is to be overlaid onto the points of interest or highlighted on the points of interest so that the user can activate it to view and/or hear the augment information overlaid on the captured video stream.
  • the entire images may be transmitted on a continuous basis to the server computer.
  • Other techniques that may be used to reduce the amount of information transmitted between the mobile computing device 100 and the server computer may include transmitting the color images in black and white gray scale, transmitting reduced dots per inch (DPI) images, etc.
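One of the reduction techniques mentioned above, converting color images to grayscale, can be sketched as follows; it cuts the per-pixel payload from three bytes to one. The luminance weights used here are the standard ITU-R BT.601 coefficients, an assumed choice since the patent does not specify one.

```python
def to_grayscale(rgb_pixels):
    """Convert a list of (r, g, b) byte triples into a list of 8-bit gray values."""
    # BT.601 luma weights: green contributes most to perceived brightness.
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

pixels = [(255, 0, 0), (0, 255, 0), (255, 255, 255)]
gray = to_grayscale(pixels)  # one byte per pixel instead of three
```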
  • the points of interest in a frame may be related to a person.
  • the video processing module 135 may be configured to analyze the frames of the captured video stream and identify facial characteristics or visual information of a person that may be in the center area of the frames. As the video processing module 135 analyzes the many frames of the captured video stream, it is possible that the video processing module 135 may identify many different persons.
  • the video processing module 135 may include a compress-decompress (codec) module 136 .
  • the codec 136 may compress the captured video stream into a DivX format.
  • DivX is a video compression technology developed by DivX, LLC of San Diego, Calif.
  • the DivX format may enable users to quickly play and create high-quality video streams.
  • DivX codec is a popular Moving Picture Experts Group-4 (MPEG-4) based codec because of its quality, speed and efficiency.
  • the codec 136 may enable the captured video streams and/or the identified features or characteristics information of the objects/points of interest to be quickly transmitted to a server computer where the communication bandwidth may be limited (e.g., wireless communication).
  • a conversion may be performed to convert the image or the captured video stream from color to black and white to reduce the size of the information to be transferred.
  • the mobile computing device 100 may detect and determine a spatially-accurate location of one or more mobile computing devices using audio and/or visual information.
  • the mobile computing device 100 may include an audio processing module 140 to process audio information.
  • the audio processing module 140 may include a chirp signal generating module 141 and speakers 142 .
  • the chirp signal generating module 141 may be configured to transmit chirp signals in a certain frequency pattern (e.g., high frequency noise, low frequency noise).
  • the chirp signals may be transmitted by the mobile computing device 100 and received by another mobile computing device located nearby. A time gap between when the chirp signal is transmitted and when it is received may be used to estimate how far the two mobile computing devices are from one another.
  • a first mobile computing device in this example may transmit its own chirp signals and may receive the chirp signals transmitted by a second mobile computing device.
  • the difference in the high and low frequency signals may be used to determine the distance traveled by the chirp from the first (or sending) mobile computing device to the second (or receiving) mobile computing device.
  • a mobile computing device may transmit a time-stamped notification to the server computer 300 to indicate that a chirp signal has been transmitted.
  • Another mobile computing device may transmit a time-stamped notification to the server computer 300 to indicate that a chirp signal has been received or detected.
  • the server computer 300 calculates the distance between the two mobile computing devices based on the time difference between the transmitting notification and the receiving notification. For some embodiments, the transmission and the receipt of the chirp signals may be used to direct the two users of the two mobile computing devices toward one another. It may be noted that the server computer 300 may already know the identity of the users using the two mobile computing devices based on the identity information associated with the two mobile computing devices.
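The server-side calculation described above amounts to time of flight multiplied by the speed of sound. A hedged sketch follows, assuming the two device clocks are synchronized; a real system would need to correct for clock skew, which the patent text does not detail.

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def chirp_distance_m(sent_ts, received_ts):
    """Estimate device separation from the chirp's time of flight.

    sent_ts / received_ts: timestamps (in seconds) from the two devices'
    notifications to the server, assumed to share a common clock.
    """
    time_of_flight = received_ts - sent_ts
    return SPEED_OF_SOUND_M_S * time_of_flight
```

For example, a 10 ms gap between the "chirp sent" and "chirp detected" notifications corresponds to roughly 3.4 m of separation.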
  • the mobile computing device 100 is an audio/video enabled device (e.g., an iPhone).
  • the chirp signal generating module 141 allows a user holding the mobile computing device 100 to detect and locate other users holding similar mobile computing devices within the vicinity.
  • the audio processing module 140 may allow detection of people within the vicinity based on both mobile computing devices transmitting and receiving the chirp signals or based on using facial recognition engine 320 (see FIG. 3A ).
  • one audio-signal-based-distance-calculation methodology that may be used is as follows.
  • the two mobile computing devices transmit/broadcast chirp signals to each other to work out the distance between them.
  • a third mobile computing device can also listen and identify the two chirp signals from the other two mobile computing devices, and thereby enable the calculation of the exact position (using X-Y coordinates).
  • the chirp signals frequencies are used to detect proximity of the two users.
  • the two mobile computing devices broadcast the chirp signals in turn.
  • Each mobile computing device with its microphone and/or audio receiver notes/detects the times when the chirp signals were broadcast and detected. Based on these time values, the distance between the two mobile computing devices is calculated.
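The turn-taking exchange above can be sketched as a round-trip time-of-flight calculation. Because each device timestamps broadcasts and detections on its own clock, only the two local intervals matter and any clock offset between the devices cancels out. All names are illustrative:

```python
SPEED_OF_SOUND = 343.0  # m/s in air; illustrative value

def chirp_exchange_distance(a_sent: float, a_heard: float,
                            b_heard: float, b_sent: float) -> float:
    """Device A chirps at a_sent and hears B's reply at a_heard (A's clock);
    device B hears A at b_heard and chirps back at b_sent (B's clock).
    The one-way flight time is half of (round trip minus B's turnaround)."""
    round_trip = a_heard - a_sent   # measured entirely on A's clock
    turnaround = b_sent - b_heard   # measured entirely on B's clock
    one_way = (round_trip - turnaround) / 2.0
    return one_way * SPEED_OF_SOUND

# 30 ms one-way flight with a 200 ms turnaround -> about 10.3 m,
# regardless of the (arbitrary) offset between the two clocks.
print(chirp_exchange_distance(0.0, 0.26, 100.0, 100.2))
```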
  • the audio processing module 140 of one mobile computing device is configured to calculate the distance to the other mobile computing device (or the user holding the device). Alternatively, the audio processing module 140 also allows the calculation of the exact position (exact distance and direction) of the other person, when a third observing mobile computing device (placed at a predetermined position) is employed.
  • the audio processing module 140 is configured to triangulate the positions of all three mobile computing devices. The audio processing module 140 then generates the approximate direction of the other mobile computing device as text indicating direction and distance.
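One way the exact X-Y position might be computed from two observing devices at known positions and two chirp-derived distances is a standard two-circle intersection (trilateration). This sketch is an assumption about the triangulation step, not the patent's algorithm:

```python
import math

def locate(p1, r1, p2, r2):
    """Intersect two distance circles (one per observing device) to get the
    candidate X-Y positions of the third device. Returns both solutions;
    a third observation or other context disambiguates between them."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("distance circles do not intersect")
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # distance from p1 to the chord
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # half-length of the chord
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return (mx + ox, my - oy), (mx - ox, my + oy)

# Observers at (0,0) and (6,0), both 5 m from the target: target is at (3, +/-4).
print(locate((0, 0), 5.0, (6, 0), 5.0))
```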
  • the audio processing module 140 may insert an arrow in the video stream being played on the mobile computing device.
  • the arrow may indicate the direction that the user of the mobile computing device should walk to get to the other person.
  • the direction information may overlay the video stream being viewed on the display screen.
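The direction hint overlaid on the video stream might then be derived from the two X-Y positions; the compass bucketing and message format below are illustrative simplifications:

```python
import math

def direction_text(me, other):
    """Turn two X-Y positions (meters, Y pointing north) into the short
    text/arrow hint overlaid on the video stream."""
    dx, dy = other[0] - me[0], other[1] - me[1]
    dist = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = north, clockwise
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    arrow = names[int((angle + 22.5) // 45) % 8]
    return f"{arrow}, about {dist:.0f} m"

print(direction_text((0, 0), (30, 40)))  # "northeast, about 50 m"
```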
  • an alternative implementation may use notification signals from both mobile computing devices, communicated to the server computer 300, to determine the distance between the mobile computing devices when the use of facial recognition operations may not be possible.
  • the notification may be generated by the audio processing module 140 to enable the users to identify and locate the other mobile computing devices or users within the same vicinity.
  • the audio processing module 140 may include coded algorithms that generate a chirping pattern at a set of audio frequencies and detect the chirp signals.
  • the algorithms also determine the distance from the current mobile computing device to the mobile computing device that transmits or broadcasts the detected chirp signals. Algorithms are also employed to minimize distance-calculation errors due to acoustic echo paths. Rather than generating high-frequency/low-frequency signals beyond the operating range of a standard mobile computing device's speaker and microphone systems to avoid background noise, the chirp signals may be a series of high and low frequency bursts within the standard range of both the microphone and speaker systems, arranged in a burst sequence at frequencies that does not occur naturally.
  • the audio processing module 140 has signal processing filters to look for specifically that pattern in those frequencies to identify both when a chirp signal is detected and what the distance is between the two mobile computing devices.
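A minimal sketch of such a burst pattern and its matched-filter detector, assuming NumPy and an arbitrary choice of burst frequencies (the patent does not specify them):

```python
import numpy as np

RATE = 44100  # sample rate; within a standard phone speaker/mic range

def make_burst_pattern(freqs=(1800, 900, 2400, 700), burst_s=0.02):
    """A fixed sequence of high/low tone bursts chosen so that the
    sequence is unlikely to occur as natural background sound
    (the frequencies here are illustrative)."""
    t = np.arange(int(RATE * burst_s)) / RATE
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

def detect_pattern(signal, pattern):
    """Matched filter: slide the known pattern over the recording and
    return the sample offset with the strongest correlation response."""
    scores = np.correlate(signal, pattern, mode="valid")
    return int(np.argmax(scores))

rng = np.random.default_rng(7)
pattern = make_burst_pattern()
recording = np.concatenate([rng.normal(0, 0.1, 5000),
                            pattern,
                            rng.normal(0, 0.1, 5000)])
print(detect_pattern(recording, pattern))  # 5000
```

The detected offset, combined with the known broadcast time, gives the flight time used by the distance calculations above.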
  • the video stream is transmitted to the server computer 300 and analyzed by the server computer 300 for facial recognition.
  • the identity of the desired user is transmitted to the server computer 300 and the images and different views are transmitted to the mobile computing device.
  • the server computer 300 stores the photos for facial recognition in the facial recognition database 340 and transmits to the mobile computing device 100 the facial recognition image or set of images (front, right-side, and left-side profiles) to be matched by the video processing module 135, making facial recognition faster and easier for the video processing module 135 of the mobile computing device 100.
  • one or more types of rapid facial recognition software that look at features such as skin tone and facial features such as the eyes may be incorporated into the video processing module 135.
  • This process may be useful in large, crowded public places such as a bar, sports arena, or theme park, at first-time meet and greets, etc.
  • the integration of audio based distance calculation and scene analysis allows the creation of dynamically formed mobile communities.
  • the system creates mobile communities automatically, enabling users to connect to people with similar interests they would otherwise never have met. A user in the vicinity of someone with a similar profile will be alerted and given directions to meet the other user.
  • the mobile computing device 100 may include a power source (e.g., a battery), a subscriber identity module (SIM), a keyboard (although soft keyboard may be implemented), input/output interfaces (e.g., video, audio ports), external power connector, external memory connectors, an antenna, a speaker, etc.
  • non-mobile devices having similar features may also be used to transmit the visual information and to receive the augment information.
  • FIG. 2 illustrates an example of a network that may be used to augment a captured video stream, in accordance with some embodiments.
  • Network 200 may be the Internet.
  • Multiple server computers 205 A- 205 C and multiple mobile computing devices 210 A- 210 D may be connected to the network 200 .
  • Each of the server computers 205 A- 205 C may be associated with a database 206 A- 206 C, respectively.
  • the mobile computing devices 210 A- 210 D may be referred to as the mobile computing devices.
  • the network environment illustrated in this example may be referred to as the client-server environment.
  • the client-server relationship allows the operations of the server computers 205 A- 205 C to be triggered from anywhere in the world, augmenting any captured video stream with useful information and enhancing the user's view of the real world.
  • the mobile computing devices 210 A- 210 D may include features similar to the mobile computing device 100 described in FIG. 1 .
  • the server computers 205 A- 205 C may include communication modules and associated applications that allow them to be connected to the network 200 and to exchange information with the mobile computing devices 210 A- 210 D.
  • a user using the mobile computing device 210 A may interact with web pages that contain embedded applications, and then supply input to the query/fields and/or service presented by a user interface associated with the applications.
  • the web pages may be served by the server computer 205 A to the Hyper Text Markup Language (HTML) or Wireless Application Protocol (WAP) enabled mobile computing device 210 A or any equivalent thereof.
  • the mobile computing device 210 A may include browser software (e.g., Internet Explorer, Firefox) to access the web pages served by the server computer 205 A.
  • FIG. 3A illustrates an example of a server computer that may be used to determine augment information for use with a captured video stream, in accordance with some embodiments.
  • Server computer 300 may include communication module (not shown) to allow it to be connected to a network such as the network 200 illustrated in FIG. 2 .
  • the server computer 300 may also include server applications that allow it to communicate with one or more mobile computing devices including, for example, the mobile computing device 100 . Communication sessions may be established between the server computer 300 and the mobile computing device 100 to enable the receipt of the visual information 306 from the mobile computing device 100 and the transmission of the augment information 391 to the mobile computing device 100 .
  • the server computer 300 may be coupled with object database 330 , facial recognition database 340 , and augment information database 350 .
  • the client module uses an extraction algorithm to identify the features of the points of interest in that frame, extracts those features along with data such as geographical information, compass direction, and other relevant information, and transmits that packet of information about that frame up to the IDOL server.
  • the IDOL server has the knowledge base and distributed computing power to identify the point of interest.
  • the IDOL server can analyze the series of frames coming in the video stream, and use this information to match the transmitted features of the points of interest to known objects or images in the database.
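The per-frame packet the client module transmits might look like the following sketch; the field names and JSON serialization are assumptions for illustration, not IDOL's actual wire format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FramePacket:
    """Per-frame payload the client module might send to the server:
    extracted features of the points of interest plus sensor context."""
    frame_id: int
    features: list        # e.g. X-Y feature points, colors, shape hints
    latitude: float
    longitude: float
    compass_deg: float    # direction the camera is facing

packet = FramePacket(frame_id=42,
                     features=[(12, 34), (56, 78)],
                     latitude=37.7955, longitude=-122.4028,
                     compass_deg=270.0)
print(json.dumps(asdict(packet)))
```

Because consecutive frames are closely related, the server can also correlate packets across frames, as the passage above notes.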
  • the augment engine 325 is preparing and selecting augment information to be transmitted back to the video processing module 135 on the mobile computing device 100 for display.
  • the augment engine 325 has a database of prepared augment information (e.g., video files, advertisements, links, etc.) to overlay onto known points of interest in the frames.
  • the augment engine 325 narrows down the possible overlay to add into the video file based on potentially what is relevant to that user.
  • the augment engine 325 can start transmitting to the mobile computing device 100 the potentially large files such as video files and advertisements while the object recognition engine 310 determines what the object is. Otherwise, the augment engine 325 can start transmitting the video files, advertisements, images, textual messages, links to relevant web pages, etc. after the point of interest is identified.
  • the video processing module 135 then overlays the augment information onto the frames of the video stream.
  • the augment information may be a textual message or highlights of the points of interest. The user can choose to activate the highlighted point of interest to view the augment information associated with the frames of the video file being displayed on the display screen 106 of the mobile computing device 100 .
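A hedged sketch of the overlay step: rather than compositing pixels, it simply records which augment items (text, highlights, links) attach to which point of interest in a frame, so a highlighted point of interest can later be activated by the user. The data layout is illustrative:

```python
def overlay_augment(frame, augment_items):
    """Attach augment overlays to a frame without mutating the original.
    A real video processing module would composite pixels; this sketch
    only records what should be drawn, keyed by point-of-interest id."""
    frame = dict(frame)  # shallow copy so the captured frame is untouched
    frame["overlays"] = [
        {"poi": item["poi"], "kind": item.get("kind", "text"),
         "content": item["content"]}
        for item in augment_items
    ]
    return frame

frame = {"index": 7, "pixels": "..."}
augmented = overlay_augment(frame, [
    {"poi": "building-1", "kind": "link", "content": "http://example.com"},
    {"poi": "building-1", "content": "National Financial Building"},
])
print(len(augmented["overlays"]))  # 2
```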
  • the object database 330 may be configured to store information about a group of known objects.
  • the information may describe the different characteristics of the known objects. This may include geographical information, color information, pattern information, and so on.
  • the characteristics of the object may include any information about the object that may be useful to identify the object and recognize it as a known object. For example, an office building located on the corner of Fourth Street and Broadway Avenue in downtown San Francisco may be identified based on its unique pyramid shape architecture and orange color.
  • the object database 330 may be a large database when it is configured to store information about many objects or many groups of objects. Many techniques may be used to generate the information about the objects. For example, the information may be generated by a human, or it may be generated by a special computer application coded to scan a color image and generate a list of objects included in the image along with their characteristics.
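Object matching against such a database might combine geographic closeness with visual characteristics, as in this sketch; the entries, distance threshold, and matching rule are illustrative, not the patent's actual filters:

```python
import math

KNOWN_OBJECTS = [
    {"name": "National Financial Building", "lat": 37.7952, "lon": -122.4028,
     "shape": "pyramid", "color": "orange"},
    {"name": "Ferry Building", "lat": 37.7955, "lon": -122.3937,
     "shape": "tower", "color": "gray"},
]

def match_object(lat, lon, shape, color, max_km=0.5):
    """Recognize an object by geographic closeness plus visual traits.
    Uses a flat-earth approximation (fine at sub-kilometer scales)."""
    for obj in KNOWN_OBJECTS:
        km = 111.0 * math.hypot(obj["lat"] - lat,
                                (obj["lon"] - lon) * math.cos(math.radians(lat)))
        if km <= max_km and obj["shape"] == shape and obj["color"] == color:
            return obj["name"]
    return None

print(match_object(37.7950, -122.4030, "pyramid", "orange"))
```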
  • the facial recognition database 340 may store facial recognition information for a group of known people.
  • the facial recognition information for each person in the group may have previously been generated and stored in the facial recognition database 340 .
  • the facial recognition database 340 may be a large database when it is configured to store facial recognition information for many people. Many techniques may be used to generate and store the facial recognition information. For example, a person may use a facial recognition application to generate his or her own facial recognition information and request to have it stored in the facial recognition database 340.
  • the augment information database 350 may be configured to store information that may be inserted into the captured video stream 305 .
  • the information may include identification information (e.g., the university), advertisement information (e.g., restaurant discount coupons), link information (e.g., a URL link to the website of a restaurant), facial information (e.g., Bob Smith), etc.
  • the server computer 300 may include an object recognition engine 310 , a facial recognition engine 320 , and an augment engine 325 .
  • the object recognition engine 310 may be configured to receive the characteristics of the objects from the mobile computing device 100 .
  • the object recognition engine 310 can be configured to take advantage of distributed workload computing across multiple servers to increase the speed of filtering the known images stored in the object database 330 against the characteristics information transmitted by the video processing module 135 .
  • the object recognition engine 310 may use the geographical information included in the frames of the captured video stream 305 and the information stored in the object database 330 to recognize the objects. For example, the yellow building with the pyramid shape located at latitude coordinate X and longitude coordinate Y may be recognized as the National Financial Building.
  • the object recognition engine 310 may use a set of filters and apply the filters to the characteristics or visual information received from the mobile computing device 100 to determine whether it can recognize what the object or who the person is. Since the captured video stream 305 is comprised of a series of closely related frames both in time and in approximate location, the frames generally include the same objects and/or persons and the characteristics/visual information may have the same pattern of identified major features of the object (or the points of interest). This may help the object recognition engine 310 to narrow down the matching options that are available in the object database 330 . For example, the object recognition engine 310 may recognize the distinctive features for the point of interest as a billboard or poster for a movie, a restaurant such as McDonalds, a building such as an office, historic landmark, residence, etc.
  • the facial recognition engine 320 may be configured to receive the facial characteristics of the persons from the mobile computing device 100 .
  • the facial recognition engine 320 may use the geographical information included in the frames of the captured video stream 305 and the information stored in the facial recognition database 340 to identify and recognize the persons.
  • the facial recognition engine 320 may also use the geographical information included in the frames to identify a location of the recognized person for direction purposes.
  • the augment engine 325 may be configured to receive the results from the object recognition engine 310 and/or the facial recognition engine 320 to determine how to select the proper augment information to be transmitted to the mobile computing device 100 to augment the identified object in the original video file, and select that augment information 391 from the augment information database 350 .
  • the augment information 391 may be related to the objects or persons that have been recognized by the object recognition engine 310 and/or the facial recognition engine 320 .
  • the augment information 391 may include any information that may provide in-depth information or content about the objects and/or persons included in the frames of the captured video stream 305 .
  • the augment information 391 may include listing of food establishments in various buildings, links to user reviews for a particular business, links to web pages, etc.
  • the augment engine 325 may select the augment information that is most relevant to the user.
  • the object may be an office building with many different businesses, and the object database 330 may include augment information associated with each of the businesses.
  • the augment information associated with an art gallery may be selected because the profile of the user or the operator of the mobile computing device 100 may indicate that the user is interested only in modern art.
  • the selected augment information 391 may then be transmitted to the mobile computing device 100 and used by the video processing module 135 to generate the augmented video stream 390 .
  • the augmented video stream 390 may then be viewed by the user or used by any other applications that may exist on the mobile computing device 100 . It is within the scope of the embodiments of the invention that the operations of capturing the video stream, processing the captured video stream, recognizing object and/or persons in the captured video stream, augmenting the captured video stream, and presenting the augmented video stream to the user or the other applications occur in real time. For example, the user may capture a video stream 305 and almost instantaneously see the augmented video stream 390 displayed on the display screen 106 of the mobile computing device 100 .
  • the augment information may include graphical information and/or audio information.
  • the graphical augment information may overlay the frames of the captured video stream 305 .
  • the audio augment information may be audible through the speaker 142 of the mobile computing device 100 .
  • the video processing module 135 on the mobile computing device 100 identifies major features of one or more points of interest within each frame of a video stream captured by the video camera 120 , transmits those identified points of interest to the server computer 300 , and displays the augment information overlaying the original captured video stream on the display screen 106 and/or outputs the audio portion of the augment information with the original captured video stream through the speakers 142 of the mobile computing device 100 .
  • the augment engine 325 may start transmitting potentially large augment information 391 (e.g., video files, advertisements, images, etc.) while the object recognition engine 310 and/or the facial recognition engine 320 are identifying the objects. Otherwise, the augment engine 325 may start transmitting the augment information 391 after the points of interest and the objects are identified.
  • the video processing module 135 may then overlay the augment information onto the video stream.
  • the user may have the option to view the captured video stream as is, or the user may select to view the corresponding augmented video stream.
  • the server computer 300 may be implemented as an Intelligent Data Operating Layer (IDOL) server using the IDOL software product and associated system of Autonomy Corporation of San Francisco, Calif.
  • the IDOL server collects indexed data from various sources through its connectors to train the engines, and stores the data in its proprietary structure, optimized for fast processing and retrieval.
  • IDOL forms a conceptual and contextual understanding of all content in an enterprise, automatically analyzing any piece of information from over a thousand different content formats and even people's interests. Hundreds of operations can be performed on digital content by IDOL, including hyperlinking, agents, summarization, taxonomy generation, clustering, eduction, profiling, alerting, and retrieval.
  • the IDOL Server has the knowledge base and interrelates the feature pattern being transmitted by the video processing module 135 . An example of the modules included in the IDOL server is illustrated in FIG. 7 .
  • the IDOL server enables organizations to benefit from automation without losing manual control. This complementary approach allows automatic processing to be combined with a variety of human controllable overrides, offering the best of both worlds and never requiring an “either/or” choice.
  • the IDOL server integrates with all known legacy systems, eliminating the need for organizations to cobble together multiple systems to support their disparate components.
  • the IDOL server may be associated with an IDOL connector, which is capable of connecting to hundreds of content repositories and supporting over a thousand file formats. This provides the ability to aggregate and index any form of structured, semi-structured, and unstructured data into a single index, regardless of where the file resides.
  • the extensive set of connectors enables a single point of search for all enterprise information (including rich media), saving organizations much time and money. With access to virtually every piece of content, IDOL provides a 360 degree view of an organization's data assets.
  • the IDOL servers implement a conceptual technology that is context-aware and uses deep audio and video indexing techniques to find the most relevant products, including music, games, and videos.
  • the IDOL servers categorize content automatically to offer intuitive navigation without manual input.
  • the IDOL servers also generate links to conceptually similar content without the user having to search.
  • the IDOL servers may be trained with free-text descriptions and sample images such as a snapshot of a product.
  • a business console presents live metrics on query patterns, popularity, and click-through, allowing the operators to configure the environment, set up promotions, and adjust relevance in response to changing demand.
  • the video processing module 135 of the mobile computing device 100 may identify the characteristics of the objects and/or persons and then cause that information to be transmitted to an IDOL server in real time. Thus, it is possible that while the augment engine 325 of the server computer 300 is performing its operations for a first set of frames, the video processing module 135 of the mobile computing device 100 may be performing its operations for a second set of frames, while a third set of frames, along with the associated augment information, is displayed on the display screen 106 .
  • FIG. 3B illustrates an example of a server computer that may be used to determine augment information for use with a captured video stream, in accordance with some embodiments.
  • the components included in the server computer 300 may be in addition to the components illustrated in FIG. 3A .
  • the server computer 300 may augment identified points of interest within each frame of a video stream with augment information on those points of interest that is more relevant to the user of the specific mobile computing device hosting the video processing module 135 by maintaining a user profile.
  • the system described herein augments each identified point of interest within each frame of a video stream with the augment information (graphical or audio information) on those points of interest that is most relevant to the user of the specific mobile computing device hosting the video processing module 135 .
  • the types of augment information that can be supplied are stored in the augment information database 350 .
  • the server computer 300 uses the mobile computing device's user-specific information in the process of selecting the augment information to be used with the video stream.
  • the video processing module 135 captures the user's habits when the user uses mobile computing device 100 .
  • the user's habit may be captured when the user is capturing a video stream, browsing the Internet, dialing phone numbers, etc.
  • the information may include phone numbers typically called, websites frequently visited, types of products purchased, the user's age and gender, home city and address information, etc.
  • the use of user-specific information, as well as the ability to automatically update and refine the information over time, is essential for accurate delivery and targeting of the augment information and differentiates the technique from all predecessors.
  • the video processing module 135 transmits to the server computer 300 a combination of the visual information on the features of the points of interest, the user's individual profile, and a number of additional pieces of information.
  • the server computer 300 determines the augment information for the frames of the video stream 305 with information of specific relevance to that user at that position and time.
  • the user-specific aspects can automatically train and update a user profile of that user, which allows the delivery of more pertinent information.
  • the information on his usage is used to build a “profile” to represent his interests, demographics, and/or specific patterns of use.
  • the user's mobile computing device 100 can be deployed to collect information and the video stream from the video camera and transmit the collected information to the server computer 300 . This is used to determine the most pertinent augmentations that can be made for that user at that specific time, and to augment the video stream 305 with additional visual or audiovisual objects or images.
  • the user profile database 360 is maintained to represent each user's interests, demographics, and/or specific patterns of use, which can be referenced by the user profile engine 328 and the augment engine 325 when determining what type of augment information to augment a point of interest in the frame of the captured video stream on the mobile computing device 100 .
  • the augment engine 325 may have a set of, for example, twenty or more different ways to augment points of interest. These range from general augment information that applies to a category of known objects, such as a chain restaurant, to specific-content augment information that applies only to a particular known object. The subject matter of the augment information likewise ranges from advertisements to historical points of interest, links to relevant web pages, and overlays of street addresses, phone numbers, and lists of shops in a building, to enhancements such as animations created to enhance that object.
  • the user profile engine 328 assists the augment engine 325 in determining which augment information to select and transmit to the mobile computing device 100 to be added to the frames of the video stream being captured by the mobile computing device 100 .
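The profile-driven selection could be sketched as a simple relevance score between candidate augment items and the user's interests. The tag-overlap scoring below is an assumption standing in for the user profile engine 328, echoing the modern-art example above:

```python
def select_augment(candidates, profile):
    """Rank candidate augment items for one point of interest by overlap
    with the user's interest profile; return the best match or None."""
    def score(item):
        return len(set(item["tags"]) & set(profile["interests"]))
    best = max(candidates, key=score)
    return best if score(best) > 0 else None

profile = {"interests": ["modern art", "coffee"]}
candidates = [
    {"content": "Law office directory", "tags": ["legal"]},
    {"content": "Modern art gallery, 3rd floor", "tags": ["modern art"]},
]
chosen = select_augment(candidates, profile)
print(chosen["content"])  # Modern art gallery, 3rd floor
```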
  • the IDOL server system may automatically profile the way the users interact with each other and with information on their mobile computing devices, build a conceptual understanding of their interests and location to deliver tailored commercial content.
  • the IDOL server provides automatic notification as soon as new tracks and relevant products are released, or location-specific information such as traffic reports and up-to-the-minute news, without the user having to search.
  • FIG. 4 illustrates an example of a network diagram with mirrored servers that may be used to filter information received from the mobile computing devices, in accordance with some embodiments.
  • Server computers 405 M, 405 A, 405 B, and 405 C connected to the network 200 may be configured as IDOL servers.
  • the IDOL servers may include a main IDOL server 405 M and multiple mirrored IDOL servers 405 A- 405 C.
  • the main IDOL server 405 M may mirror its information onto the mirrored IDOL servers 405 A- 405 C.
  • the mirroring may include mirroring the content of the main IDOL server database 406 M into the mirrored IDOL server databases 406 A- 406 C.
  • the object database 330 , the facial recognition database 340 , and the augment information database 350 may be mirrored across all of the mirrored IDOL servers 405 A- 405 C.
  • the main IDOL server 405 M and the mirrored IDOL servers 405 A- 405 C may be located or distributed in various geographical locations to serve the mobile computing devices in these areas.
  • the main IDOL server 405 M may be located in Paris, while the mirrored IDOL server 405 A may be located in Boston, 405 B in Philadelphia, and 405 C in New York.
  • Each of the IDOL servers illustrated in FIG. 4 may include its own object recognition engine 310 , facial recognition engine 320 , and augment engine 325 .
  • the distribution of servers within a given location helps to improve the identification and augmentation response time.
  • the mirroring of identical server site locations also helps to improve the identification and augmentation response time.
  • mirroring of identical server site locations aids in servicing potentially millions of mobile computing devices, all with the resident video application submitting packets with distinguishing features for the points of interest, by distributing the workload and limiting the physical transmission distance and associated time.
  • the IDOL server set is duplicated with the same content and mirrored across the Internet to distribute this load to multiple identical sites, both to improve response time and to handle the capacity of the queries from those mobile computing devices.
  • the video processing module 135 may include a coded block to call up and establish a persistent secure communication channel with a nearest non-overloaded mirrored site of the main IDOL server when the mobile computing device 100 is used to capture a video stream.
  • the mobile computing device 410 A may be connected with the IDOL server 405 A via communication channel 450 because both are located in Boston.
  • the mobile computing device 410 A may be connected with the IDOL server 405 C in New York because it may not be overloaded even though the IDOL server 405 C may be further from the mobile computing device 410 A than the IDOL server 405 A.
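Choosing the nearest non-overloaded mirrored site, as in the Boston/New York example above, might be sketched as follows; the load threshold and distances are illustrative:

```python
def pick_mirror(servers):
    """Choose the closest mirrored IDOL site that is not overloaded.
    `servers` maps site name -> (distance_km, load fraction); the 0.8
    load cutoff is an illustrative threshold."""
    usable = {name: dist for name, (dist, load) in servers.items()
              if load < 0.8}
    return min(usable, key=usable.get) if usable else None

servers = {
    "Boston": (10, 0.95),        # nearest mirror, but overloaded
    "New York": (300, 0.40),
    "Philadelphia": (430, 0.60),
}
print(pick_mirror(servers))  # New York
```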
  • a set of IDOL servers may be used to filter the information received from the mobile computing devices.
  • a hierarchical set of filters may be spread linearly across the set of IDOL servers.
  • These IDOL servers may work together in collaboration to process the transmitted object and/or person visual information to determine or recognize what the object or who the person is. For example, when the mobile computing device 410 A establishes the communication channel 450 with the IDOL server 405 A, the IDOL servers 405 A- 405 C may work together to process the information received from the mobile computing device 410 A. This collaboration is illustrated by the communication channel 451 between the IDOL server 405 A and 405 C, and the communication channel 452 between the IDOL server 405 A and 405 B.
  • the IDOL servers 405 C, 405 B and 405 A may work together to process the information received from the mobile computing device 410 B. This collaboration is illustrated by the communication channel 451 between the IDOL server 405 C and 405 A, and the communication channel 453 between the IDOL server 405 C and 405 B.
  • Each server in the set of servers applies filters to eliminate the pattern of features received from the mobile computing device 100 as possible matches to feature sets of known objects in the object database 330 . Entire categories of possible matching objects can be eliminated simultaneously, while subsets even within a single category of possible matching objects can be simultaneously solved for on different servers.
  • Each server may hierarchically rule out potentially known images on each machine to narrow down the hierarchical branch and leaf path to a match or no match for the analyzed object of interest.
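The collaborative elimination could be sketched by running one category filter per server in parallel; here threads stand in for the mirrored IDOL servers, and the coarse "sides/color" signatures are an illustrative stand-in for the real feature filters:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative catalog: each category lives on a different "server", which
# can rule out its whole branch if no coarse signature matches.
CATALOG = {
    "billboards": [{"name": "movie poster", "sides": 4, "color": "red"}],
    "restaurants": [{"name": "burger chain sign", "sides": 4,
                     "color": "yellow"}],
    "buildings": [{"name": "National Financial Building", "sides": 3,
                   "color": "yellow"}],
}

def filter_category(args):
    """One server's filter pass: keep only objects in its category whose
    features survive the match against the transmitted pattern."""
    _category, objects, pattern = args
    return [o["name"] for o in objects
            if o["sides"] == pattern["sides"] and o["color"] == pattern["color"]]

def recognize(pattern):
    with ThreadPoolExecutor() as pool:
        results = pool.map(filter_category,
                           [(c, objs, pattern) for c, objs in CATALOG.items()])
    return [name for batch in results for name in batch]

print(recognize({"sides": 3, "color": "yellow"}))
```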
  • the mobile computing device 100 has built-in Wi-Fi circuitry, and the video stream is transmitted to an IDOL server on the Internet.
  • the IDOL server set contains an object recognition engine 310 distributed across the IDOL server set, IDOL databases, and an augment engine 325 as well.
  • the object recognition engine 310 distributed across the IDOL server set applies a hierarchical set of filters to the transmitted identified points of interest and their associated major features within each frame of a video stream to determine what the one or more points of interest are within that frame. Since this is a video feed of a series of frames closely related both in time and in approximate location, the pattern of identified major features of points of interest within each frame of a video stream helps to narrow down the matching known object stored in the object database 330 .
  • each of the IDOL servers may apply filters to eliminate certain pattern of features as possible matches to features of known objects stored in the object database 330 . Entire categories of objects may be eliminated simultaneously, while subsets even within a single category of objects may be simultaneously identified as potential matching objects by the collaborating IDOL servers.
  • Each IDOL server may hierarchically rule out potential known objects to narrow down the hierarchical branch and leaf path to determine whether there is a match.
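As a sketch of this hierarchical rule-out, the object database can be pictured as a category tree in which every branch carries a coarse feature signature; a branch whose signature is too far from the query is pruned wholesale, so an entire category falls away in one comparison. The tree contents, signatures, and threshold below are hypothetical illustrations, not values from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    signature: tuple              # coarse feature signature for this branch
    children: list = field(default_factory=list)

def distance(a, b):
    # Euclidean distance between two coarse signatures
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_object(node, query, threshold):
    """Hierarchically rule out branches whose signature is too far from
    the query; only surviving branches are descended, and a surviving
    leaf is reported as the matching known object."""
    if distance(node.signature, query) > threshold:
        return None                     # eliminate the whole category at once
    if not node.children:
        return node.name                # leaf: potential match
    for child in node.children:
        found = match_object(child, query, threshold)
        if found is not None:
            return found
    return None

# Toy object database: two categories, three known objects.
db = Node("root", (0.5, 0.5), [
    Node("buildings", (0.9, 0.1), [Node("clock_tower", (0.95, 0.05)),
                                   Node("bridge", (0.85, 0.15))]),
    Node("vehicles", (0.1, 0.9), [Node("bus", (0.1, 0.95))]),
])

print(match_object(db, (0.93, 0.06), threshold=0.7))  # prints: clock_tower
```

With the query signature close to "clock_tower", the "vehicles" branch is never descended, which is the simultaneous-elimination behavior described above.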
  • each of the IDOL servers may match the pattern of the visually distinctive features of the points of interest in the frame to the known objects in the object database 330 .
  • the geometric shape formed by the X-Y coordinates of the features of the point of interest may come across to a human like a dot-to-dot connection illustration.
  • recognizing the image/object associated with those dots on the piece of paper is then a simple task. This may include comparing the transmitted dot-to-dot type geometric shape features, along with their distinctive colors, recognized text, numbers and symbols, geographical information, and direction information relative to the camera, to the feature sets stored in the object database 330 .
  • the dot-to-dot type geometric shapes can be subset into distinctive triangles, pyramids, rectangles, cubes, circles, cylinders, etc., each with its own associated distinctive colors or patterns, to aid in the identification and recognition.
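One minimal way to picture this comparison, assuming a point of interest is summarized as a bag of (primitive shape, dominant color) pairs, is to score candidates by set overlap. The primitive names and example sets below are hypothetical.

```python
def shape_similarity(observed, known):
    """Jaccard overlap between two sets of (shape, color) primitives:
    1.0 means identical bags of primitives, 0.0 means nothing shared."""
    observed, known = set(observed), set(known)
    return len(observed & known) / len(observed | known)

# Hypothetical stored feature set for a known object vs. what a frame shows.
clock_tower = {("rectangle", "grey"), ("circle", "white"), ("triangle", "grey")}
frame_poi   = {("rectangle", "grey"), ("circle", "white")}

score = shape_similarity(frame_poi, clock_tower)
```

Here two of the three stored primitives are observed, giving a similarity of 2/3; a real system would combine this with color, text, and geographic cues as the surrounding text describes.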
  • Each of the IDOL servers may map the collection of feature points about the points of interest to a stored pattern of feature points for known objects, to match what is in the frames to a known object.
  • the video processing module 135 may continuously transmit the identified features of the points of interest 306 in the frames of the captured video stream 305 , while the object recognition engine 310 (distributed over a large number of IDOL servers) and the augment engine 325 transmit back the augment information to augment the identified images/objects in the captured frames of the video file stored in a memory of the mobile computing device 100 , so that the identified object being shown on the display is augmented in near real time (e.g., less than 5 seconds).
  • the server computer 300 has a set of one or more databases to store a scalable database of visual information on locations, such as buildings and structures, in order to perform subsequent matching of a visual data stream to determine the building or structure that is being viewed.
  • the server-client system addresses the problem of determining the exact location of a mobile user, and of determining exactly what the user is looking at, at any point, by matching the view against a database of characteristics associated with those visual images.
  • the system provides the ability to construct a scalable solution to the problem of identifying location, regardless of position and with minimal training.
  • the system with the server computer 300 and a set of one or more databases is trained on a set of views of the world and the models derived are stored for future retrieval.
  • the combination of geographical information and visual characteristics allows a faster matching.
  • the mobile computing device can be deployed to collect geospatial information and a video data stream from the camera and feed it back to the system. This is used to pinpoint the objects or locations within view and augment the video stream with additional visual or audiovisual objects or images.
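The speed-up from combining geographical and visual information can be sketched as a geographic pre-filter: only known objects near the reported GPS fix are kept for the more expensive visual comparison. The landmark names and coordinates below are illustrative examples, not data from the patent.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidates_near(db, lat, lon, radius_m):
    """Geographical pre-filter: keep only known objects within radius_m
    of the camera; visual matching then runs on this short list."""
    return [o["name"] for o in db
            if haversine_m(lat, lon, o["lat"], o["lon"]) <= radius_m]

db = [
    {"name": "Big Ben",      "lat": 51.5007, "lon": -0.1246},
    {"name": "London Eye",   "lat": 51.5033, "lon": -0.1196},
    {"name": "Tower Bridge", "lat": 51.5055, "lon": -0.0754},
]
print(candidates_near(db, 51.5007, -0.1246, radius_m=300))  # prints: ['Big Ben']
```

Widening the radius admits more candidates, so the radius trades match speed against robustness to GPS error.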
  • FIG. 5 illustrates an example flow diagram of a process that may execute on a mobile computing device to create an augmented video stream, in accordance with some embodiments.
  • the process may be associated with operations that may be performed on the mobile computing device 100 .
  • the mobile computing device 100 may be capturing many frames of a video stream. As the frames are being captured, they are analyzed and characteristics information of objects in the frames is extracted, as shown in block 505 .
  • the extraction may involve the features, the geometric shape information, the distinct colors, the dot-to-dot type pattern, and other relevant information.
  • the extraction may involve generating a pattern of X-Y coordinates of the geometric shapes of the point of interest and the color associated with the shapes, the geographic coordinates from the GPS module, and the direction information from the direction sensor 122 associated with the video camera 121 of the mobile computing device.
  • the characteristics information and geographical information are transmitted to a server computer (e.g., server computer 300 ) in a network so that the server computer can filter the information and determine the augment information.
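A minimal sketch of such a per-frame transmission packet, with hypothetical field names (the patent does not specify a wire format), might look like:

```python
def build_packet(frame_id, points_of_interest, gps_fix, heading_deg):
    """Assemble one per-frame packet: the X-Y outline and dominant color
    of each point of interest, plus the GPS fix and the camera heading
    from the direction sensor."""
    return {
        "frame": frame_id,
        "points_of_interest": [
            {"xy_outline": outline, "color": color}
            for outline, color in points_of_interest
        ],
        "gps": {"lat": gps_fix[0], "lon": gps_fix[1]},
        "heading_deg": heading_deg,
    }

packet = build_packet(
    frame_id=42,
    points_of_interest=[([(0, 0), (4, 0), (2, 3)], "grey")],  # one triangle
    gps_fix=(51.5007, -0.1246),
    heading_deg=270.0,
)
```

Transmitting a packet like this, rather than the raw frame, is what keeps the uplink small enough for continuous per-frame transmission.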
  • the server computer that receives the characteristics information may be one that is geographically closest to the mobile computing device 100 . If this server computer is overloaded, a nearby non-overloaded server computer may be selected instead.
  • the selected server computer may collaborate with other mirrored server computers to determine the augment information.
  • the server computers may perform comparing and matching operations using a hierarchical approach.
  • the server computers may find different augment information that may be used. Criteria may be used to select the appropriate augment information to transmit to the mobile computing device 100 .
  • the augment information is received from the server computer. It may be possible that while the mobile computing device 100 is receiving the augment information for a series of frames, the mobile computing device 100 is also preparing characteristics information for another series of frames to be transmitted to the server computer. In general, for each frame in the video stream, a transmission packet containing the characteristics information of the point(s) of interest is transmitted to the server computer from the mobile computing device 100 .
  • the mobile computing device 100 may use the augment information to overlay the appropriate frames of the video stream and create an augmented video stream.
  • the augmented video stream is displayed on the display screen 106 of the mobile computing device 100 .
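The overlap described above — transmitting characteristics for later frames while augment information for earlier frames arrives and is overlaid — can be sketched as a small pipeline. The callables and the in-flight depth are hypothetical stand-ins for the real capture, network, and overlay code.

```python
from collections import deque

def augmented_stream(frames, extract, request_augment, overlay, in_flight=2):
    """Keep up to `in_flight` frames' feature packets outstanding while
    earlier frames' augment information is applied, so that capture,
    transmission, and overlay proceed concurrently."""
    pending = deque()
    for frame in frames:
        pending.append((frame, request_augment(extract(frame))))
        if len(pending) >= in_flight:
            f, aug = pending.popleft()
            yield overlay(f, aug)
    while pending:                      # drain the tail of the pipeline
        f, aug = pending.popleft()
        yield overlay(f, aug)

out = list(augmented_stream(
    frames=[1, 2, 3],
    extract=str,
    request_augment=lambda feats: "aug:" + feats,
    overlay=lambda frame, aug: (frame, aug),
))
print(out)  # prints: [(1, 'aug:1'), (2, 'aug:2'), (3, 'aug:3')]
```

In a real implementation `request_augment` would be asynchronous; the point of the sketch is only the ordering: frame N is overlaid while frame N+1's features are already in flight.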
  • FIG. 6A illustrates an example flow diagram of a process that may execute on a server computer to determine augment information, in accordance with some embodiments.
  • the operations associated with this process may be performed by many servers working collaboratively to provide the results to the mobile computing device in almost real time.
  • the process may start at block 605 where the characteristics and geographical information are received from the mobile computing device 100 .
  • Direction information of the video camera 121 may also be received from the direction sensor 122 .
  • the information transmitted from the mobile computing device 100 may be compressed.
  • the server may include decompression logic to decompress the information.
  • the server may also include compression logic to compress the augment information if necessary.
  • the servers may perform comparing and matching or recognition operations. This may include filtering and eliminating any known objects that do not possess the same characteristics. This may include narrowing down to potential known objects that may possess the same characteristics.
  • the server may need to determine which augment information to select, as shown in block 615 .
  • the augment information is transmitted to the mobile computing device 100 . It may be possible that while the server is transmitting the augment information for a set of frames of a video stream, the server is also performing the operations in block 610 for another set of frames associated with the same video stream. It may be noted that the processes described in FIG. 5 and FIG. 6A may also be used to perform facial recognition using the facial recognition engine 320 and the facial recognition database 340 .
  • FIG. 6B illustrates an example flow diagram of a process that may execute on a server computer to determine augment information based on user profile, in accordance with some embodiments.
  • the operations associated with this process may be performed by an IDOL server and may expand on the operations described in block 615 of FIG. 6A .
  • the process may start at block 625 where the identity of the mobile computing device 100 is verified.
  • the identity information of the mobile computing device 100 may have been transmitted to the server computer 300 during the initial communication such as, for example, during the establishing of the communication channel between the mobile device 100 and the server computer 300 .
  • the identity information may be used by the user profile engine 328 to determine the appropriate user profile from the user profile database 360 , as shown in block 630 .
  • the user profile may have been collected as the mobile computing device 100 is used by the user over time.
  • the user profile may include specific user-provided information.
  • the augment information may be selected based on the information in the user profile. This allows relevant augment information to be transmitted to the mobile computing device 100 for augmentation of the video stream 305 , as shown in block 640 .
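One simple way this profile-based selection could be sketched, assuming each candidate augment carries topic tags and the profile records interests (both vocabularies hypothetical), is:

```python
def select_augment(candidates, profile):
    """Score each candidate augment by how many of its tags overlap the
    interests in the user profile, and return the best-scoring one."""
    def score(candidate):
        return len(set(candidate["tags"]) & set(profile["interests"]))
    return max(candidates, key=score)

candidates = [
    {"text": "Gift shop: open until 6 pm", "tags": ["shopping"]},
    {"text": "Completed in 1859",          "tags": ["history", "architecture"]},
]
profile = {"interests": ["history"]}
print(select_augment(candidates, profile)["text"])  # prints: Completed in 1859
```

A user whose profile leans toward shopping would receive the other candidate, which is the relevance behavior the surrounding text describes.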
  • FIG. 6C illustrates an example flow diagram of a process that may be used to determine distance based on the chirp signals generated by the mobile computing devices, in accordance with some embodiments.
  • the process may operate after the facial recognition operations by the facial recognition engine 320 have been performed and positive recognition has occurred.
  • the process may start at block 650 where the two mobile computing devices make initial chirp communication.
  • the first mobile computing device broadcasts the chirp signal a predetermined number of times (e.g., three times) and notes the clock times at which they were broadcast.
  • the second mobile computing device records an audio signal and detects the chirp signals and their clock times.
  • the procedure is reversed after a few seconds of pause (e.g., five (5) seconds) when the second mobile computing device broadcasts its chirp signal for the same predetermined number of times.
  • the second device then notes its broadcast time, and sends detection time and broadcast time to the first device.
  • the first mobile computing device detects the chirp signals of the second mobile computing device in its recorded audio signal.
  • a first formula is used to determine the distance between the two mobile computing devices based on the measured clock times.
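The "first formula" itself is not given in the text; a standard two-way acoustic ranging formula consistent with the exchange described — broadcast times noted on each device's own clock, detection times on the other's — could look like the following. The unknown offset between the two clocks cancels in the round-trip difference.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def chirp_distance(t_tx_1, t_rx_2, t_tx_2, t_rx_1):
    """Two-way ranging: device 1 broadcasts at t_tx_1 and detects the
    reply at t_rx_1 (device 1's clock); device 2 detects at t_rx_2 and
    replies at t_tx_2 (device 2's clock). The clocks need not be
    synchronized, since the constant offset cancels in the difference."""
    round_trip = (t_rx_1 - t_tx_1) - (t_tx_2 - t_rx_2)
    return SPEED_OF_SOUND * round_trip / 2.0

# Devices 10 m apart; device 2's clock runs 100 s ahead of device 1's,
# and device 2 pauses 5 s before replying.
tau = 10.0 / SPEED_OF_SOUND                  # one-way travel time
d = chirp_distance(0.0, 100.0 + tau, 105.0 + tau, 5.0 + 2 * tau)
```

Note that both the clock offset (100 s) and the deliberate pause (5 s) drop out, leaving only twice the one-way travel time.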
  • a third mobile computing device listening to the two chirp signal broadcasts by the first and second mobile computing devices also detects them in its recorded audio signal and reports the times to the first mobile computing device.
  • the third mobile computing device may be placed in a pre-determined location.
  • the first mobile computing device uses a second formula to calculate the position (x, y) of the second mobile computing device with respect to itself and the third mobile computing device, and triangulates the position and distance among all three mobile computing devices.
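The "second formula" is likewise unspecified; a standard trilateration from the three pairwise chirp distances, placing the first device at the origin and the third at a known position on the x-axis, could be sketched as follows (the sign of y is ambiguous from distances alone).

```python
import math

def triangulate(d12, d13, d23):
    """Given the pairwise distances among three devices, place device 1
    at the origin and device 3 at (d13, 0), and recover an (x, y) for
    device 2 by intersecting the two distance circles."""
    x = (d12 ** 2 + d13 ** 2 - d23 ** 2) / (2 * d13)
    y = math.sqrt(max(d12 ** 2 - x ** 2, 0.0))
    return x, y

# Device 2 actually at (3, 4) and device 3 at (6, 0):
# d12 = 5, d13 = 6, d23 = sqrt((6-3)**2 + 4**2) = 5
print(triangulate(5.0, 6.0, 5.0))  # prints: (3.0, 4.0)
```

This is why the third device helps: two devices alone yield only a distance, while three pairwise distances pin down a relative position for the overlaid direction arrows.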
  • the video processing module 135 of the first and second mobile computing devices then overlays arrows or footsteps on the video stream being displayed on each respective display screen to indicate which direction each user of the first and second mobile computing device should proceed in to meet up.
  • a combination of scene analysis, facial recognition, and subsequent audio signal processing is used to detect and determine a spatially-accurate location of one or more mobile computing devices.
  • IDOL stands for Intelligent Data Operating Layer.
  • FIG. 7 illustrates an example block diagram of some modules of an IDOL server, in accordance with some embodiments.
  • IDOL server 700 may include automatic hyperlinking module 705 , automatic categorization module 710 , automatic query guidance module 715 , automatic taxonomy generation module 720 , profiling module 725 , automatic clustering module 730 , and conceptual retrieval module 735 .
  • the automatic hyperlinking module 705 is configured to allow manual and fully automatic linking between related pieces of information. The hyperlinks are generated in real-time at the moment the document is viewed.
  • the automatic categorization module 710 is configured to allow deriving precise categories through concepts found within unstructured text, ensuring that all data is classified in the correct context.
  • the automatic query guidance module 715 is configured to provide query suggestions to find the most relevant information. It identifies the different meanings of a term by dynamically clustering the results into their most relevant groupings.
  • the automatic taxonomy generation module 720 is configured to automatically generate taxonomies and instantly organize the data into a familiar child/parent taxonomical structure. It identifies, names, and creates each node based on an understanding of the concepts within the data set as a whole.
  • the profiling module 725 is configured to accurately understand an individual's interests based on their browsing, content consumption, and content contribution. It generates a multifaceted conceptual profile of each user based on both explicit and implicit profiles.
  • the automatic clustering module 730 is configured to help analyze large sets of documents and user profiles and automatically identify inherent themes or information clusters. It can even cluster unstructured content exchanged in emails, telephone conversations, and instant messages.
  • the conceptual retrieval module 735 is configured to recognize patterns using a scalable technology that recognizes concepts and finds information based on words that may not be located in the documents.
  • the IDOL server 700 may also include other modules and features that enable it to work with the mobile computing device 100 to generate the augmented video stream as described herein. As described above, one or more of the modules of the IDOL server 700 may be used to implement the functionalities of the object recognition engine 310 , the facial recognition engine 320 , the augment engine 325 , and the user profile engine 328 .
  • FIG. 8 illustrates an example computer system that may be used to implement an augmented video stream, in accordance with some embodiments.
  • Computing environment 802 is only one example of a suitable computing environment and is not intended to suggest any limitations as to the scope of use or functionality of the embodiments of the present invention. Neither should the computing environment 802 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in FIG. 8 .
  • Embodiments of the invention may be operational with general purpose or special purpose computer systems or configurations.
  • Examples of well-known computer systems that may be used include, but are not limited to, personal computers, server computers, hand-held or laptop devices, Tablets, Smart phones, Netbooks, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Embodiments of the present invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer system.
  • program modules include routines, programs, databases, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • Those skilled in the art can implement the description and/or figures herein as computer-executable instructions, which can be embodied on any form of computer readable media discussed below.
  • Embodiments of the present invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • the computing environment 802 includes a general-purpose computer system 810 .
  • Components of the computer system 810 may include, but are not limited to, a processing unit 820 having one or more processing cores, a system memory 830 , and a system bus 821 that couples various system components including the system memory to the processing unit 820 .
  • the system bus 821 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as Mezzanine bus.
  • Computer system 810 typically includes a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer system 810 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Uses of computer readable media include storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer system 810 .
  • Communication media typically embody computer readable instructions, data structures, or program modules in a transport mechanism, and include any information delivery media.
  • the system memory 830 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 831 and random access memory (RAM) 832 .
  • a basic input/output system (BIOS) 833 , containing the basic routines that help to transfer information between elements within the computer system 810 , such as during start-up, is typically stored in ROM 831 .
  • RAM 832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 820 .
  • FIG. 8 illustrates operating system 834 , application programs 835 , other program modules 836 , and program data 837 .
  • the computer system 810 may also include other removable/non-removable volatile/nonvolatile computer storage media.
  • FIG. 8 illustrates a hard disk drive 841 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 851 that reads from or writes to a removable, nonvolatile magnetic disk 852 , and an optical disk drive 855 that reads from or writes to a removable, nonvolatile optical disk 856 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, USB drives and devices, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 841 is typically connected to the system bus 821 through a non-removable memory interface such as interface 840
  • magnetic disk drive 851 and optical disk drive 855 are typically connected to the system bus 821 by a removable memory interface, such as interface 850 .
  • the drives and their associated computer storage media discussed above and illustrated in FIG. 8 provide storage of computer readable instructions, data structures, program modules and other data for the computer system 810 .
  • hard disk drive 841 is illustrated as storing operating system 844 , application programs 845 , other program modules 846 , and program data 847 .
  • these components can either be the same as or different from operating system 834 , application programs 835 , other program modules 836 , and program data 837 .
  • the operating system 844 , the application programs 845 , the other program modules 846 , and the program data 847 are given different numeric identification here to illustrate that, at a minimum, they are different copies.
  • a participant may enter commands and information into the computer system 810 through input devices such as a keyboard 862 , a microphone 863 , and a pointing device 861 , such as a mouse, trackball or touch pad.
  • Other input devices may include a joystick, game pad, scanner, or the like.
  • These and other input devices are often connected to the processing unit 820 through a user input interface 860 that is coupled with the system bus 821 , but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
  • a monitor 891 or other type of display device is also connected to the system bus 821 via an interface, such as a video interface 890 .
  • computers may also include other peripheral output devices such as speakers 897 and printer 896 , which may be connected through an output peripheral interface 890 .
  • the computer system 810 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 880 .
  • the remote computer 880 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer system 810 .
  • the logical connections depicted in FIG. 8 include a local area network (LAN) 871 and a wide area network (WAN) 873 , but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • the computer system 810 When used in a LAN networking environment, the computer system 810 is connected to the LAN 871 through a network interface or adapter 870 . When used in a WAN networking environment, the computer system 810 typically includes a modem 872 or other means for establishing communications over the WAN 873 , such as the Internet.
  • the modem 872 which may be internal or external, may be connected to the system bus 821 via the user-input interface 860 , or other appropriate mechanism.
  • program modules depicted relative to the computer system 810 may be stored in a remote memory storage device.
  • FIG. 8 illustrates remote application programs 885 as residing on remote computer 880 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • some embodiments of the present invention may be carried out on a computer system such as that described with respect to FIG. 8 .
  • some embodiments of the present invention may be carried out on a server, a computer devoted to message handling, handheld devices, or on a distributed system in which different portions of the present design may be carried out on different parts of the distributed computing system.
  • the communication module 872 may employ a Wireless Application Protocol (WAP) to establish a wireless communication channel.
  • the communication module 872 may implement a wireless networking standard such as Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, IEEE std. 802.11-1999, published by IEEE in 1999.
  • Examples of mobile computing devices may be a laptop computer, a tablet computer, a netbook, a cell phone, a personal digital assistant, or another similar device with on-board processing power and wireless communications ability that is powered by a Direct Current (DC) power source, such as a fuel cell or a battery, which supplies DC voltage to the mobile computing device, resides solely within the mobile computing device, and needs to be recharged on a periodic basis.

Abstract

A mobile computing device includes a video capturing module to capture a video stream, a global positioning system (GPS) module to generate geographical information associated with frames of the video stream to be captured by the video capturing module, and a video processing module to analyze the frames of the video stream and extract features of points of interest included in the frames. The video processing module is configured to transmit the features of the points of interest and the geographical information to a server computer and to receive augment information from the server computer using wireless communication. The video processing module uses the augment information to overlay the frames of the video stream to generate an augmented video stream.

Description

    FIELD
  • Embodiments of the present invention generally relate to the field of digital image processing, and in some embodiments, specifically relate to inserting information messages into videos.
  • BACKGROUND
  • Various types of video capturing devices are available in the market today at very affordable prices. This gives many consumers the ability to capture video of any occasion at any place and any time. Typically, the content of the captured video is limited to what is visible to the operator of the video capturing device. For example, when the operator is videotaping a building because of its unique architecture, what the operator sees in a viewfinder or on a display of the video capturing device are images of the same building and nothing more.
  • SUMMARY
  • For some embodiments, a mobile computing device may be configured to enable a user to capture a video stream and view the video stream after the video stream has been augmented in real time. The mobile computing device, equipped with a global positioning system (GPS), may generate geographical information for the video stream. A server computer coupled with the mobile computing device may be configured to receive visual information about points of interest in the video stream and the geographical information from the mobile computing device. The server computer then identifies augment information and transmits the augment information to the mobile computing device. The augment information may be used to augment the captured video stream to create an augmented video stream, which may be viewed by the user on a display screen of the mobile computing device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The multiple drawings refer to the embodiments of the invention. While the embodiments of the invention described herein are subject to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail.
  • FIG. 1 illustrates one example of a mobile computing device that may be used, in accordance with some embodiments.
  • FIG. 2 illustrates an example of a network that may be used to augment a captured video stream, in accordance with some embodiments.
  • FIG. 3A illustrates an example of a server computer that may be used to determine augment information for use with a captured video stream, in accordance with some embodiments.
  • FIG. 3B illustrates an example of user profile information, in accordance with some embodiments.
  • FIG. 4 illustrates an example of a network diagram with mirrored servers that may be used to filter information received from the mobile computing devices, in accordance with some embodiments.
  • FIG. 5 illustrates an example flow diagram of a process that may execute on a mobile computing device to create an augmented video stream, in accordance with some embodiments.
  • FIG. 6A illustrates an example flow diagram of a process that may execute on a server computer to determine augment information, in accordance with some embodiments.
  • FIG. 6B illustrates an example flow diagram of a process that may execute on a server computer to determine augment information based on user profile, in accordance with some embodiments.
  • FIG. 6C illustrates an example flow diagram of a process that may be used to determine distance based on the chirp signals generated by the mobile computing devices, in accordance with some embodiments.
  • FIG. 7 illustrates an example block diagram of some modules of an IDOL server, in accordance with some embodiments.
  • FIG. 8 illustrates an example computer system that may be used to implement an augmented video stream, in accordance with some embodiments.
  • DETAILED DESCRIPTION
  • For some embodiments, a mobile computing device configured to augment video streams with augment information received from a server computer connected to a network is disclosed. The mobile computing device includes a processor, a memory, a built-in battery to power the mobile computing device, a built-in video camera, a display screen, and built-in Wi-Fi circuitry to wirelessly communicate with the server computer. The mobile computing device includes a video capturing module coupled with the processor and configured to capture a video stream, and a global positioning system (GPS) module coupled with the video capturing module and configured to generate geographical information associated with frames of the video stream to be captured by the video capturing module. The mobile computing device also includes a video processing module coupled with the video capturing module and configured to analyze the frames of the video stream and extract features of points of interest included in the frames. The video processing module is also configured to cause transmission of the features of the points of interest and the geographical information to the server computer and to receive the augment information from the server computer. The video processing module is configured to 1) overlay, 2) highlight, or 3) both overlay and highlight the points of interest in the frames of the video stream with the augment information to generate an augmented video stream. The augmented video stream is then displayed on the display screen of the mobile computing device.
  • In the following description, numerous specific details are set forth, such as examples of specific data signals, components, connections, etc. in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well known components or methods have not been described in detail but rather in a block diagram in order to avoid unnecessarily obscuring the present invention. Thus, the specific details set forth are merely exemplary. The specific details may be varied from and still be contemplated to be within the spirit and scope of the present invention.
  • Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. These algorithms may be written in a number of different software programming languages such as C, C++, Java, or other similar languages. Also, an algorithm may be implemented with lines of code in software, configured logic gates in software, or a combination of both. In an embodiment, the logic consists of electronic circuits that follow the rules of Boolean Logic, software that contains patterns of instructions, or any combination of both.
  • It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers, or other such information storage, transmission or display devices.
  • The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Portions of any modules or components described herein may be implemented in lines of code in software, configured logic gates in hardware, or a combination of both, and the portions implemented in software are tangibly stored on a computer readable storage medium.
  • The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method blocks. The required structure for a variety of these systems will appear from the description below.
  • In the following description of exemplary embodiments, reference is made to the accompanying drawings that form a part hereof, and in which it is shown by way of illustration specific embodiments in which the invention can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments of this invention. As used herein, the terms “couple,” “connect,” and “attach” are interchangeable and include various forms of connecting one part to another either directly or indirectly. Also, it should be appreciated that one or more structural features described in one embodiment could be implemented in a different embodiment, even if not specifically mentioned as being a feature thereof.
  • Overview
  • Embodiments of the present invention provide a scalable way of combining two or more data sources, including using the visual information to trigger augmentations and the geographical location to allow advanced augmentation of the captured video stream. Information presented by video streams is typically limited to what is visible or audible to the users, such as geometric shapes, color patterns associated with those shapes, symbols, and other features associated with objects in that video stream. There may be much more in-depth information associated with the scenes in the video streams that is not conveyed to the user. The use of visual information or characteristics information about points of interest or objects alone to augment a video stream may be useful but may not be sufficient or scalable when the volume of visual information or characteristics information is large. The use of geographical information alone may not permit the augmentation of specific objects or views of the scenes in the video stream.
  • Combining the visual information and the geographical information may allow a rapid recognition or matching to the characteristics of objects that are known and pre-stored in an object database. The geographical information may be provided by a global positioning system (GPS). Combining the visual information with the geographical information may reduce the number of possible points of interest that need to be sorted through by a server computer to identify and recognize known objects and/or persons. The rough geographical information from the GPS reduces the number of possible points of interest that need to be sorted through as a possible match to known objects in that area. Further, direction information about where a video camera of the mobile computing device is facing when capturing the video stream is also transmitted to the server computer. The direction information may be provided by a built-in compass or direction sensor in the mobile computing device to the server computer along with the features of the points of interest in that frame. All of these assist in reducing the sheer number of potential matches when comparing the characteristics information transmitted from the mobile computing device to known objects stored in a database, making for a scalable and manageable system.
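  • The coarse GPS filter described above can be sketched as follows. This is an illustrative example only, not the patented implementation; the helper names (`narrow_candidates`, the `known_objects` records) and the one-kilometer radius are assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def narrow_candidates(known_objects, lat, lon, radius_km=1.0):
    """Coarse GPS filter: keep only objects within radius of the device."""
    return [o for o in known_objects
            if haversine_km(lat, lon, o["lat"], o["lon"]) <= radius_km]

known_objects = [
    {"name": "National Financial Building", "lat": 37.7793, "lon": -122.4193},
    {"name": "Distant Landmark", "lat": 40.7128, "lon": -74.0060},
]
nearby = narrow_candidates(known_objects, 37.7790, -122.4190)
# Only the nearby building survives the coarse filter; visual feature
# matching then runs against this much smaller candidate set.
```

The design point is that the geographic cut runs first, so the expensive visual comparison sees only objects that could plausibly be in view.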
  • Mobile Computing Device and Generation of Augmented Video Streams
  • FIG. 1 illustrates one example of a mobile computing device that may be used, in accordance with some embodiments. Mobile computing device 100 may include display module 105, communication module 110, global positioning system (GPS) module 115, video capturing module 120, processor 125, and memory 130. The mobile computing device 100 may be, for example, a cellular phone, a laptop, a netbook, a touch pad, or any other similar device. The mobile computing device 100 cooperates with the network 200 (see FIG. 2) to supply augment information to points of interest captured in the frames of a video stream in the mobile computing device 100 based on a combination of geographical and visual information. The mobile computing device 100 includes a video processing module 135 to assist in the identification of objects captured in each video frame as well as to insert the augment information into the frames of the video stream.
  • The communication module 110 may be used to allow the mobile computing device 100 to be connected to a network such as, for example, the network 200 (see FIG. 2). The communication module 110 may be configured to enable the mobile computing device 100 to connect to the network 200 using wireless communication protocol or any other suitable communication protocols. For example, the communication module 110 may include a wireless fidelity (Wi-Fi) module 111, a Bluetooth module 112, a broadband module 113, a short message service (SMS) module 114, and so on. As will be described, the communication module 110 may be configured to transmit visual information associated with a video stream from the mobile computing device 100 to one or more server computers connected to the network 200.
  • The GPS module 115 may be used to enable the user to get directions from one location to another location. The GPS module 115 may also be used to enable generating the geographical information and associating the geographical information with images and frames of video streams. This process is typically referred to as geotagging. When the mobile computing device 100 is used to capture a video stream, the geographical information may be inserted into one or more of the frames of the video stream. The geographical information may be inserted and stored with images, video streams, and text messages generated by the mobile computing device 100. The geographical information may be stored as metadata, and may include latitude and longitude coordinates. For example, the server system for the tagging and augmentation of geographically-specific locations can use a location of a building in an image by using the latitude and longitude coordinates associated or stored with that image and other distinctive features of the building to determine what objects are appearing in a video stream.
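  • A minimal sketch of the geotagging step above, attaching latitude/longitude metadata to a captured frame. The record layout and field names here are assumptions for illustration; the patent does not prescribe a metadata format.

```python
def geotag_frame(frame_id, latitude, longitude, heading=None):
    """Return a metadata record associating GPS coordinates with a frame."""
    meta = {
        "frame_id": frame_id,
        "latitude": latitude,    # decimal degrees
        "longitude": longitude,  # decimal degrees
    }
    if heading is not None:
        meta["heading"] = heading  # compass direction the camera is facing
    return meta

tag = geotag_frame(42, 37.7793, -122.4193, heading="NW")
```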
  • The video capturing module 120 may be configured to capture images or video streams. The video capturing module 120 may be associated with a video camera 121 and may enable a user to capture the images and/or the video streams. The video capturing module 120 may be associated with a direction sensor 122 to sense the direction that the video camera 121 is pointing to. The video camera 121 may be a built-in video camera.
  • The display module 105 may be configured to display the images and/or the video streams captured by the video capturing module 120. For some embodiments, the display module 105 may be configured to display the images and/or the video streams that have been augmented with the augment information stored in a database in the network. The display module 105 may be associated with a display screen 106.
  • The memory 130 may include internal memory and expansion memory. For example, the internal memory may include read-only memory (ROM) and random access memory (RAM), and the expansion memory may include flash memory. The memory 130 may be used to store an operating system (OS) and various other applications including, for example, productivity applications, entertainment applications, communication applications, image and/or video processing applications, user interface applications, etc. The processor 125 may be configured to execute instructions associated with the OS, network browsers, and the various applications. Some examples of the OS may include Android from Google, iOS from Apple, Windows Phone from Microsoft, and WebOS from Palm/HP, and so on. The network browsers may be used by the mobile computing device 100 to allow the user to access websites using the network 200.
  • For some embodiments, the mobile computing device 100 may include a video processing module 135 configured to process images and/or video streams captured by the video capturing module 120. The video processing module 135 may analyze the frames of the captured video stream and identify the objects/points of interest within each frame of the captured video stream. Identifying the points of interest for an object may include breaking the object into geometric shapes and distinctive features. The operations may apply to a set of objects with each object in the set broken down into different geometric shapes and associated distinctive features.
  • The video processing module 135 may use an extraction algorithm to identify the features of the points of interest in a frame, extract those features along with the geographical information and other relevant information, and transmit that packet of information about that frame up to the server computer (see FIG. 3A), for each frame being captured by the video camera 121. The video processing module 135 may generate a pattern of X-Y coordinates of the geometric shapes of the point of interest and the color associated with the shapes. The video processing module 135 may extract the direction information from a compass or direction sensor 122 associated with the video camera 121 to determine the direction that the video camera 121 is facing when capturing the frames in the video stream. The direction information provided by the direction sensor 122 may include north, south, east, west, up, down, and any possible related combinations (e.g., Northwest and up 20 degrees from a horizontal plane, etc.). For some embodiments, the pattern of points used for the points of interest, the number of points used, and the number of points of interest may depend on the number of distinct points of interest in the frame. Non-centered or periphery objects in the frame, small objects, and non-distinctive objects can be filtered out by the extraction algorithm, while only bold and distinctive features on the points of interest may be extracted.
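  • The per-frame packet described above might look like the following sketch. The structure, field names, and use of JSON are assumptions made for illustration; the patent only specifies that shapes, coordinates, colors, geographical information, and compass direction are packaged together as a small text payload.

```python
import json

def build_frame_packet(frame_no, shapes, latitude, longitude, direction):
    """Package extracted features with geo and compass data for upload.

    shapes: list of dicts, each with the X-Y coordinate outline of a
    geometric shape and its dominant color.
    """
    packet = {
        "frame": frame_no,
        "features": shapes,
        "geo": {"lat": latitude, "lon": longitude},
        "direction": direction,  # e.g. "NW, up 20 degrees"
    }
    return json.dumps(packet)  # compact text, far smaller than the raw frame

shapes = [{"outline": [(0, 0), (40, 0), (20, 30)], "color": "orange"}]
payload = build_frame_packet(1, shapes, 37.7793, -122.4193, "NW")
```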
  • The video processing module 135 may analyze each captured frame of the video stream. The video processing module 135 may relate patterns from the series of frames to assist in determining what the points/objects of interest are and to enable faster transmission of the features of the points of interest. For some embodiments, no transmission of the features from a particular frame may be necessary if there is no change to the same features that were previously transmitted. For some embodiments, if a current frame includes features that are different from the previous frame, only the difference in the change of features is transmitted.
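  • The delta-transmission idea above can be sketched as follows, assuming (purely for illustration) that each frame's features are keyed by a feature identifier; the function name and data shapes are not from the patent.

```python
def features_to_send(previous, current):
    """Return only the features that are new or changed in this frame.

    previous/current: dicts mapping a feature id to its description.
    An empty result means nothing needs to be transmitted at all.
    """
    return {fid: desc for fid, desc in current.items()
            if previous.get(fid) != desc}

prev = {"roof": "pyramid/orange", "sign": "Broadway"}
curr = {"roof": "pyramid/orange", "sign": "Broadway Ave", "door": "glass"}
delta = features_to_send(prev, curr)
# Only "sign" (changed) and "door" (new) are sent; "roof" is unchanged
# and therefore omitted from the transmission.
```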
  • For some embodiments, the objects/points of interest may generally be located in the center area of the frames. It may be noted that certain consecutive frames of the captured video stream may have the same object in the center area or at least contained within the series of consecutive frames. The video processing module 135 may analyze these frames to identify the characteristics or visual information of the object. As the video capturing module 120 continues to capture the video stream, it may be possible that the video processing module 135 may identify many different objects.
  • The video processing module 135 may perform basic scene analysis, including using optical character recognition (OCR), to extract the distinctive features of the points of interest within the frames of the captured video stream and encode them into a small pattern of X-Y coordinates in a geometric-shape format, with associated distinctive color and pattern information for each feature. The video processing module 135 may identify the geographical information of that object and other known distinctive features for that object. For some embodiments, the information transmitted by the mobile computing device 100 to the server computer may be in the form of text.
  • The above operations performed by the video processing module 135 can be used to minimize the size of the file being transmitted to the server computer and hasten the near real time recognition by the server computer of the points of interest, as well as the near real time transmission of the augment information back to the mobile computing device 100. Rather than trying to transmit a JPEG or MPEG type file, the video processing module 135 identifies and extracts distinctive features including shapes, dot-to-dot type X-Y coordinates of the shapes, patterns, colors, letters, numbers, symbols, etc. associated with the objects/points of interest in the video frame. The augment information is to be overlaid onto the points of interest, or the points of interest are highlighted, so the user can activate them to view and/or hear the augment information overlaid on the captured video stream. As transmission speeds increase, entire images may be transmitted on a continuous basis to the server computer. Other techniques that may be used to reduce the amount of information transmitted between the mobile computing device 100 and the server computer include transmitting color images in black-and-white gray scale, transmitting reduced dots per inch (DPI) images, etc.
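  • One of the payload-reduction techniques mentioned above, converting color pixels to gray scale, can be sketched as follows. The luma weights used here are the common ITU-R BT.601 coefficients, which is an assumption on my part; the patent does not specify a conversion formula.

```python
def to_grayscale(pixels):
    """pixels: list of (r, g, b) tuples; returns one luma byte per pixel.

    Each pixel shrinks from three channel values to one, roughly a
    threefold reduction in the data to transmit.
    """
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

gray = to_grayscale([(255, 0, 0), (0, 255, 0), (255, 255, 255)])
```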
  • For some embodiments, the points of interest in a frame may be related to a person. The video processing module 135 may be configured to analyze the frames of the captured video stream and identify facial characteristics or visual information of a person that may be in the center area of the frames. As the video processing module 135 analyzes the many frames of the captured video stream, it is possible that the video processing module 135 may identify many different persons.
  • The video processing module 135 may include a compress-decompress (codec) module 136. For some embodiments, the codec 136 may compress the captured video stream into a DivX format. DivX is a video compression technology developed by DivX, LLC of San Diego, Calif. The DivX format may enable users to quickly play and create high-quality video streams. DivX codec is a popular Moving Picture Experts Group-4 (MPEG-4) based codec because of its quality, speed and efficiency. As a DivX codec, the codec 136 may enable the captured video streams and/or the identified features or characteristics information of the objects/points of interest to be quickly transmitted to a server computer where the communication bandwidth may be limited (e.g., wireless communication). Other techniques that enable fast transmission of information from the mobile computing device to a server computer may also be used. For example, instead of transmitting an image or a captured video stream in its original color, a conversion may be performed to convert the image or the captured video stream from color to black and white to reduce the size of the information to be transferred.
  • Chirp Signals Transmission, Detection, Location Approximation
  • The mobile computing device 100, with potentially a little interaction with the server computer, may detect and determine a spatially-accurate location of one or more mobile computing devices using audio and/or visual information. For some embodiments, the mobile computing device 100 may include an audio processing module 140 to process audio information. The audio processing module 140 may include a chirp signal generating module 141 and speakers 142. The chirp signal generating module 141 may be configured to transmit chirp signals in a certain frequency pattern (e.g., high frequency noise, low frequency noise). The chirp signals may be transmitted by the mobile computing device 100 and received by another mobile computing device located nearby. A time gap between when the chirp signal is transmitted and when it is received may be used to estimate how far the two mobile computing devices are from one another. A first mobile computing device in this example may transmit its own chirp signals and may receive the chirp signals transmitted by a second mobile computing device. The difference between the high and low frequency signals may be used to determine the distance traveled by the chirp from the first (or sending) mobile computing device to the second (or receiving) mobile computing device.
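  • The time-gap estimate above reduces to multiplying the propagation delay by the speed of sound. A minimal sketch, assuming timestamps in seconds on a common clock and a nominal speed of sound of about 343 m/s at room temperature (both assumptions, not details from the patent):

```python
SPEED_OF_SOUND_M_S = 343.0  # nominal, varies with temperature

def distance_from_time_gap(t_sent, t_received):
    """Estimate device separation from chirp send/receive timestamps (s)."""
    return (t_received - t_sent) * SPEED_OF_SOUND_M_S

d = distance_from_time_gap(0.000, 0.029)
# A 29 ms gap corresponds to roughly 10 meters of separation.
```

In practice the two devices would not share a clock, which is one reason the text also describes sending time-stamped notifications to the server computer 300 so the difference can be computed in one place.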
  • In an alternative, a mobile computing device may transmit a time-stamped notification to the server computer 300 to indicate that a chirp signal has been transmitted. Another mobile computing device may transmit a time-stamped notification to the server computer 300 to indicate that a chirp signal has been received or detected. The server computer 300 then calculates the distance between the two mobile computing devices based on the time difference between the transmitting notification and the receiving notification. For some embodiments, the transmission and the receipt of the chirp signals may be used to direct the two users of the two mobile computing devices toward one another. It may be noted that the server computer 300 may already know the identity of the users using the two mobile computing devices based on the identity information associated with the two mobile computing devices.
  • The mobile computing device 100 is an audio/video enabled device (e.g., an iPhone). The chirp signal generating module 141 allows a user holding the mobile computing device 100 to detect and locate other users holding similar mobile computing devices within the vicinity. The audio processing module 140 may allow detection of people within the vicinity based on both mobile computing devices transmitting and receiving the chirp signals or based on using facial recognition engine 320 (see FIG. 3A).
  • For some embodiments, one audio-signal-based-distance-calculation methodology that may be used is as follows. The two mobile computing devices transmit/broadcast chirp signals to each other to work out the distance between them. A third mobile computing device can also listen and identify the two chirp signals from the other two mobile computing devices, and thereby enable the calculation of the exact position (using X-Y coordinates).
  • As discussed, the chirp signal frequencies are used to detect proximity of the two users. The two mobile computing devices broadcast the chirp signals in turn. Each mobile computing device with its microphone and/or audio receiver notes/detects the times when the chirp signals were broadcast and detected. Based on these time values, the distance between the two mobile computing devices is calculated. The audio processing module 140 of one mobile computing device is configured to calculate the distance to the other mobile computing device (or the user holding the device). Alternatively, the audio processing module 140 also allows the calculation of the exact position (exact distance and direction) of the other person, when a third observing mobile computing device (placed at a predetermined position) is employed. The audio processing module 140 is configured to triangulate the positions of all three mobile computing devices. The audio processing module 140 will then generate the approximate direction of the other mobile computing device as text indicating direction and distance.
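  • The triangulation step above amounts to intersecting two circles: the unknown device lies at a chirp-derived distance from each of two devices with known positions. The following is an illustrative sketch under that reading; the function, the coordinate setup, and the specific distances are assumptions, not the patented algorithm.

```python
import math

def circle_intersection(c1, r1, c2, r2):
    """Return the two intersection points of circles (c1, r1) and (c2, r2)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 along the axis
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # offset perpendicular to the axis
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return ((xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d))

# Device A at the origin, observer at (10, 0); chirp-derived distances
# place device B 6 m from A and 8 m from the observer. The two returned
# points are the two candidate positions for device B.
p1, p2 = circle_intersection((0, 0), 6.0, (10, 0), 8.0)
```

The residual two-point ambiguity is why a third observing device at a predetermined position is useful: its own distance measurement selects one of the two candidates.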
  • The audio processing module 140 may insert an arrow in the video stream being played on the mobile computing device. The arrow may indicate the direction that the user of the mobile computing device should walk to get to the other person. The direction information may overlay the video stream being viewed on the display screen. As discussed earlier, an alternative implementation may use notification signals from both mobile computing devices, communicated to the server computer 300, to determine the distance between the mobile computing devices when the use of the facial recognition operations may not be possible. The notification may be generated by the audio processing module 140 to enable the users to identify and locate the other mobile computing devices or users within the same vicinity.
  • The audio processing module 140 may include coded algorithms that enable generating a chirping pattern at set audio frequencies and detecting the chirp signals. The algorithms also enable determining the distance from the current mobile computing device to the mobile computing device that transmits or broadcasts the detected chirp signals. Algorithms are also employed to minimize the distance calculation errors due to acoustic echo paths. Rather than generating high-frequency/low-frequency signals beyond the capabilities/range of operation of a standard mobile computing device's speaker system and microphone system to avoid background noise, the chirp signals may be a series of high and low frequency bursts within the standard range of both the microphone and speaker system, but a burst sequence at those frequencies that does not happen naturally in nature. The audio processing module 140 has signal processing filters to look specifically for that pattern in those frequencies to identify both when a chirp signal is detected and what the distance is between the two mobile computing devices.
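  • The signature-matching idea above can be sketched at the symbol level: treat the chirp as a known sequence of high/low bursts and scan the received burst stream for exactly that sequence. This abstracts away the actual signal processing filters; the pattern and function are illustrative assumptions.

```python
# An assumed chirp signature: a high/low burst sequence chosen because it
# is unlikely to occur naturally (H = high-frequency burst, L = low).
CHIRP_PATTERN = ["H", "L", "H", "H", "L"]

def find_chirp(bursts, pattern=CHIRP_PATTERN):
    """Return the index where the chirp signature starts, or -1 if absent."""
    n = len(pattern)
    for i in range(len(bursts) - n + 1):
        if bursts[i:i + n] == pattern:
            return i
    return -1

stream = ["L", "L", "H", "L", "H", "H", "L", "H"]
start = find_chirp(stream)
# The signature begins at index 2 of the received burst stream; the
# detection time at that index feeds the distance calculation.
```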
  • For some embodiments, as the video scene is being captured by the mobile computing device 100, the video stream is transmitted to the server computer 300 and analyzed by the server computer 300 for facial recognition. Alternatively, the identity of the desired user is transmitted to the server computer 300, and the images and different views are transmitted to the mobile computing device. Thus, the server computer 300 stores the photos for facial recognition in the facial recognition database 340 and transmits to the mobile computing device 100 the facial recognition image or set of images (front, right-side, and left-side profiles) to be matched by the video processing module 135, making the facial recognition faster and easier for the video processing module 135 of the mobile computing device 100. It may be noted that one or more types of rapid facial recognition software that look at features such as skin tone and facial features such as the eyes may be incorporated into the video processing module 135.
  • This process may be useful in large crowded public places such as bars, sports arenas, or theme parks, first time meet and greets, etc. The integration of audio based distance calculation and scene analysis allows the creation of dynamically formed mobile communities. The system creates mobile communities automatically, enabling users to connect to people with similar interests they would otherwise never have met. A user in the vicinity of someone with a similar profile will be alerted and given the directions to meet the other user.
  • Although not shown, the mobile computing device 100 may include a power source (e.g., a battery), a subscriber identity module (SIM), a keyboard (although a soft keyboard may be implemented), input/output interfaces (e.g., video, audio ports), an external power connector, external memory connectors, an antenna, a speaker, etc. It should be noted that, although the mobile computing device 100 is used in the examples herein, non-mobile devices having similar features may also be used to transmit the visual information and to receive the augment information.
  • Network Environment with the Mobile Computing Devices and the Server Computers
  • FIG. 2 illustrates an example of a network that may be used to augment a captured video stream, in accordance with some embodiments. Network 200 may be the Internet. Multiple server computers 205A-205C and multiple mobile computing devices 210A-210D may be connected to the network 200. Each of the server computers 205A-205C may be associated with a database 206A-206C, respectively. The mobile computing devices 210A-210D may be referred to as the mobile computing devices. The network environment illustrated in this example may be referred to as the client-server environment. The client-server relationship allows the operations of the mobile computing devices 210A-210D to be triggered anywhere in the world and to augment any captured video stream with useful information enhancing the user's view of the real world. It should be noted that the number of mobile computing devices, server computers, and databases illustrated in this example is for illustration purposes only and is not meant to be restrictive. It is within the scope of embodiments of the present invention that there may be many server computers and databases worldwide to serve many more mobile computing devices.
  • The mobile computing devices 210A-210D may include features similar to the mobile computing device 100 described in FIG. 1. The server computers 205A-205C may include communication modules and associated applications that allow them to be connected to the network 200 and to exchange information with the mobile computing devices 210A-210D. For example, a user using the mobile computing device 210A may interact with web pages that contain embedded applications, and then supply input to the query/fields and/or service presented by a user interface associated with the applications. The web pages may be served by the server computer 205A on the Hyper Text Markup Language (HTML) or wireless access protocol (WAP) enabled mobile computing device 210A or any equivalent thereof. The mobile computing device 210A may include browser software (e.g., Internet Explorer, Firefox) to access the web pages served by the server computer 205A.
  • Server Computer and Selection of the Augment Information
  • FIG. 3A illustrates an example of a server computer that may be used to determine augment information for use with a captured video stream, in accordance with some embodiments. Server computer 300 may include a communication module (not shown) to allow it to be connected to a network such as the network 200 illustrated in FIG. 2. The server computer 300 may also include server applications that allow it to communicate with one or more mobile computing devices including, for example, the mobile computing device 100. Communication sessions may be established between the server computer 300 and the mobile computing device 100 to enable the receipt of the visual information 306 from the mobile computing device 100 and the transmission of the augment information 391 to the mobile computing device 100. For some embodiments, the server computer 300 may be coupled with object database 330, facial recognition database 340, and augment information database 350.
  • As discussed, the client module uses an extraction algorithm to identify the features of the points of interest in that frame, extracts those features along with data such as geographical information, compass direction, and other relevant information, and transmits that packet of information about that frame up to the IDOL server. The IDOL server has the knowledge base and distributed computing power to identify the point of interest. The IDOL server can analyze the series of frames coming in the video stream, and use this information to match the transmitted features of the points of interest to known objects or images in the database. At approximately the same time as the object recognition engine 310 is hierarchically filtering or narrowing down the possible known matching images/objects to the transmitted features, the augment engine 325 is preparing and selecting augment information to be transmitted back to the video processing module 135 on the mobile computing device 100 for display.
  • The augment engine 325 has a database of prepared augment information (e.g., video files, advertisements, links, etc.) to overlay onto known points of interest in the frames. The augment engine 325 narrows down the possible overlays to add into the video file based potentially on what is relevant to that user. The augment engine 325 can start transmitting to the mobile computing device 100 the potentially large files such as video files and advertisements while the object recognition engine 310 determines what the object is. Otherwise, the augment engine 325 can start transmitting the video files, advertisements, images, textual messages, links to relevant web pages, etc. after the point of interest is identified. The video processing module 135 then overlays the augment information onto the frames of the video stream. The augment information may be a textual message or highlights of the points of interest. The user can choose to activate the highlighted point of interest to view the augment information associated with the frames of the video file being displayed on the display screen 106 of the mobile computing device 100.
  • The object database 330 may be configured to store information about a group of known objects. The information may describe the different characteristics of the known objects. This may include geographical information, color information, pattern information, and so on. In general, the characteristics of the object may include any information about the object that may be useful to identify the object and recognize it as a known object. For example, an office building located on the corner of Fourth Street and Broadway Avenue in downtown San Francisco may be identified based on its unique pyramid shape architecture and orange color. It may be noted that the object database 330 may be a large database when it is configured to store information about many objects or many groups of objects. Many techniques may be used to generate the information about the objects. For example, the information may be generated by humans, or it may be generated by a special computer application coded to scan a color image and generate a list of objects included in the image along with their characteristics.
  • For some embodiments, the facial recognition database 340 may store facial recognition information for a group of known people. The facial recognition information for each person in the group may have previously been generated and stored in the facial recognition database 340. The facial recognition database 340 may be a large database when it is configured to store facial recognition information for many people. Many techniques may be used to generate and store the facial recognition information. For example, a person may use a facial recognition application to generate his or her own facial recognition information and request to have it stored in the facial recognition database 340.
  • For some embodiments, the augment information database 350 may be configured to store information that may be inserted into the captured video stream 305. The information may include identification information (e.g., the university), advertisement information (e.g., restaurant discount coupons), link information (e.g., a URL link to the website of a restaurant), facial information (e.g., Bob Smith), etc. Different types of augment information may be stored for the same object. For some embodiments, the server computer 300 may include an object recognition engine 310, a facial recognition engine 320, and an augment engine 325.
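As a rough illustration of storing several typed augment entries per known object, consider the following minimal Python sketch. It is not from the patent; the class and method names are invented for this example:

```python
from collections import defaultdict


class AugmentInfoDB:
    """Toy augment-information store: several typed entries per known object,
    e.g. identification text, advertisements, and URL links for one building."""

    def __init__(self):
        self._entries = defaultdict(list)

    def add(self, object_id, kind, payload):
        # kind might be "identification", "advertisement", "link", "facial", ...
        self._entries[object_id].append({"kind": kind, "payload": payload})

    def lookup(self, object_id, kind=None):
        # Return every entry for the object, or only entries of one type.
        entries = self._entries.get(object_id, [])
        if kind is None:
            return entries
        return [e for e in entries if e["kind"] == kind]
```

A database like this allows the augment engine to pull all candidate entries for a recognized object and then filter by type or relevance.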
  • The object recognition engine 310 may be configured to receive the characteristics of the objects from the mobile computing device 100. The object recognition engine 310 can be configured to take advantage of distributed workload computing across multiple servers to increase the speed of filtering out known images stored in the object database 330 when matching against the characteristics information transmitted by the video processing module 135. The object recognition engine 310 may use the geographical information included in the frames of the captured video stream 305 and the information stored in the object database 330 to recognize the objects. For example, the yellow building with the pyramid shape located at latitude coordinate X and longitude coordinate Y may be recognized as the National Financial Building. For some embodiments, the object recognition engine 310 may apply a set of filters to the characteristics or visual information received from the mobile computing device 100 to determine whether it can recognize what the object is or who the person is. Since the captured video stream 305 is comprised of a series of frames closely related both in time and in approximate location, the frames generally include the same objects and/or persons, and the characteristics/visual information may have the same pattern of identified major features of the object (or the points of interest). This may help the object recognition engine 310 narrow down the matching options that are available in the object database 330. For example, the object recognition engine 310 may recognize the distinctive features of the point of interest as a billboard or poster for a movie, a restaurant such as McDonald's, or a building such as an office, historic landmark, residence, etc.
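The combined geographical-plus-visual narrowing described above can be pictured with a short Python sketch. This is an illustrative toy, not the actual engine; the field names and the 500 m search radius are assumptions made for the example:

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def filter_candidates(known_objects, query, radius_m=500):
    """Keep only known objects near the camera whose coarse visual
    attributes (dominant color, shape class) agree with the query."""
    matches = []
    for obj in known_objects:
        if haversine_m(obj["lat"], obj["lon"], query["lat"], query["lon"]) > radius_m:
            continue  # geographically impossible: object is too far from the camera
        if obj["color"] != query["color"] or obj["shape"] != query["shape"]:
            continue  # coarse visual attributes disagree
        matches.append(obj["name"])
    return matches
```

The geographic cut eliminates most of a large object database before any detailed visual matching needs to run, which is the speed advantage the paragraph describes.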
  • The facial recognition engine 320 may be configured to receive the facial characteristics of the persons from the mobile computing device 100. The facial recognition engine 320 may use the geographical information included in the frames of the captured video stream 305 and the information stored in the facial recognition database 340 to identify and recognize the persons. For some embodiments, the facial recognition engine 320 may also use the geographical information included in the frames to identify a location of the recognized person for direction purposes.
  • The augment engine 325 may be configured to receive the results from the object recognition engine 310 and/or the facial recognition engine 320, determine how to select the proper augment information to be transmitted to the mobile computing device 100 to augment the identified object in the original video file, and select that augment information 391 from the augment information database 350. The augment information 391 may be related to the objects or persons that have been recognized by the object recognition engine 310 and/or the facial recognition engine 320. In general, the augment information 391 may include any information that may provide in-depth information or content about the objects and/or persons included in the frames of the captured video stream 305. For example, the augment information 391 may include a listing of food establishments in various buildings, links to user reviews for a particular business, links to web pages, etc. The augment engine 325 may select the augment information that is most relevant to the user. For example, the object may be an office building with many different businesses, and the object database 330 may include augment information associated with each of the businesses. However, only the augment information associated with an art gallery may be selected because the profile of the user or the operator of the mobile computing device 100 may indicate that the user is only interested in modern art.
  • The selected augment information 391 may then be transmitted to the mobile computing device 100 and used by the video processing module 135 to generate the augmented video stream 390. The augmented video stream 390 may then be viewed by the user or used by any other applications that may exist on the mobile computing device 100. It is within the scope of the embodiments of the invention that the operations of capturing the video stream, processing the captured video stream, recognizing objects and/or persons in the captured video stream, augmenting the captured video stream, and presenting the augmented video stream to the user or the other applications occur in real time. For example, the user may capture a video stream 305 and almost instantaneously see the augmented video stream 390 displayed on the display screen 106 of the mobile computing device 100.
  • For some embodiments, the augment information may include graphical information and/or audio information. The graphical augment information may overlay the frames of the captured video stream 305. The audio augment information may be audible through the speaker 142 of the mobile computing device 100. Thus, the video processing module 135 on the mobile computing device 100 identifies major features of one or more points of interest within each frame of a video stream captured by the video camera 120, transmits those identified points of interest to the server computer 300, and displays the augment information overlaying the original captured video stream on the display screen 106 and/or outputs the audio portion of the augment information with the original captured video stream through the speaker 142 of the mobile computing device 100.
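A trivial way to picture the graphical overlay step is to treat a frame as rows of characters instead of pixels and composite the augment text at a given position. This is purely illustrative and stands in for the pixel compositing the video processing module would perform:

```python
def overlay_augment(frame_lines, text, row, col):
    """Composite a textual augment onto a 'frame' represented as a list of
    equal-length strings (a stand-in for pixel rows) at the given position."""
    out = list(frame_lines)  # copy so the original frame is untouched
    line = out[row]
    # Splice the augment text over the underlying content at (row, col).
    out[row] = line[:col] + text + line[col + len(text):]
    return out
```

A real implementation would blend bitmaps or draw into a GPU surface, but the splice-over-underlying-content idea is the same.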
  • For some embodiments, the augment engine 325 may start transmitting potentially large augment information 391 (e.g., video files, advertisements, images, etc.) while the object recognition engine 310 and/or the facial recognition engine 320 are identifying the objects. Otherwise, the augment engine 325 may start transmitting the augment information 391 after the points of interest and the objects are identified. The video processing module 135 may then overlay the augment information onto the video stream. For some embodiments, the user may have the option to view the captured video stream as is, or the user may select to view the corresponding augmented video stream.
  • For some embodiments, the server computer 300 may be implemented as an Intelligent Data Operating Layer (IDOL) server using the IDOL software product and associated system of Autonomy Corporation of San Francisco, Calif. The IDOL server collects indexed data from connectors from various sources to train the engines and stores it in its proprietary structure, optimized for fast processing and retrieval of data. As the information processing layer, IDOL forms a conceptual and contextual understanding of all content in an enterprise, automatically analyzing any piece of information from thousands of different content formats and even people's interests. Hundreds of operations can be performed on digital content by IDOL, including hyperlinking, agents, summarization, taxonomy generation, clustering, eduction, profiling, alerting and retrieval. The IDOL server has the knowledge base and interrelates the feature pattern being transmitted by the video processing module 135. An example of the modules included in the IDOL server is illustrated in FIG. 7.
  • The IDOL server enables organizations to benefit from automation without losing manual control. This complementary approach allows automatic processing to be combined with a variety of human-controllable overrides, offering the best of both worlds and never requiring an “either/or” choice. The IDOL server integrates with all known legacy systems, eliminating the need for organizations to cobble together multiple systems to support their disparate components.
  • The IDOL server may be associated with an IDOL connector, which is capable of connecting to hundreds of content repositories and supporting thousands of file formats. This provides the ability to aggregate and index any form of structured, semi-structured and unstructured data into a single index, regardless of where the file resides. The extensive set of connectors enables a single point of search for all enterprise information (including rich media), saving organizations much time and money. With access to virtually every piece of content, IDOL provides a 360 degree view of an organization's data assets.
  • The IDOL servers implement a conceptual technology that is context-aware and uses deep audio and video indexing techniques to find the most relevant products, including music, games and videos. The IDOL servers categorize content automatically to offer intuitive navigation without manual input. The IDOL servers also generate links to conceptually similar content without the user having to search. The IDOL servers may be trained with free-text descriptions and sample images such as a snapshot of a product. A business console presents live metrics on query patterns, popularity, and click-through, allowing the operators to configure the environment, set up promotions and adjust relevance in response to changing demand.
  • For some embodiments, the video processing module 135 of the mobile computing device 100 may identify the characteristics of the objects and/or persons and then cause that information to be transmitted to an IDOL server in real time. Thus, it is possible that while the augment engine 325 of the server computer 300 is performing its operations for a first set of frames, the video processing module 135 of the mobile computing device 100 is performing its operations for a second set of frames, and a third set of frames along with the associated augment information is being displayed on the display screen 106.
  • User Profile Information and Selection of Relevant Augment Information
  • FIG. 3B illustrates an example of a server computer that may be used to determine augment information for use with a captured video stream, in accordance with some embodiments. The components included in the server computer 300 may be in addition to the components illustrated in FIG. 3A. These components include the user profile engine 328 and the user profile database 360. The server computer 300 may augment identified points of interest within each frame of a video stream with augment information on those points of interest that is most relevant to the user of the specific mobile computing device hosting the video processing module 135 by maintaining a user profile.
  • For some embodiments, the system described herein augments each identified point of interest within each frame of a video stream with the augment information (graphical or audio information) on those points of interest that is most relevant to the user of the specific mobile computing device hosting the video processing module 135. The types of augment information that can be supplied are stored in the augment information database 350. The server computer 300 uses the mobile computing device's user-specific information in the process of selecting the augment information to be used with the video stream.
  • The video processing module 135 captures the user's habits when the user uses the mobile computing device 100. For example, the user's habits may be captured when the user is capturing a video stream, browsing the Internet, dialing phone numbers, etc. The information may include phone numbers typically called, websites frequently visited, types of products purchased, the user's age and gender, home city and address information, etc. The use of user-specific information, as well as the ability to automatically update and refine the information over time, is essential for accurate delivery and targeting of the augment information and differentiates the technique from its predecessors.
  • The video processing module 135 transmits a combination of the visual information for the features of the points of interest, a user's individual profile, and a number of additional pieces of information to the server computer 300. The server computer 300 then determines the augment information for the frames of the video stream 305 with information of specific relevance to that user at that position and time. The user-specific aspects can automatically train and update a user profile of that user, which allows the delivery of more pertinent information. As each user utilizes the system of augmenting the video stream, the information on his usage is used to build a “profile” to represent his interests, demographics, and/or specific patterns of use. Subsequently, the user's mobile computing device 100 can be deployed to collect information and the video stream from the video camera and transmit the collected information to the server computer 300. This is used to determine the most pertinent augmentations that can be made for that user at that specific time, and to augment the video stream 305 with additional visual or audiovisual objects or images.
  • The user profile database 360 is maintained to represent each user's interests, demographics, and/or specific patterns of use, which can be referenced by the user profile engine 328 and the augment engine 325 when determining what type of augment information to use to augment a point of interest in the frame of the captured video stream on the mobile computing device 100. The augment engine 325 may have a set of, for example, twenty or more different ways to augment points of interest. These range from general augment information that applies to a category of known objects, such as a chain restaurant, to specific-content augment information that applies only to a particular known object. The subject matter of the augment information also varies, from advertisements to historical points of interest, links to relevant web pages, overlays of street addresses, phone numbers, and lists of shops in a building, to enhancements such as animations created to enhance that object.
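A minimal sketch of profile-driven selection: each candidate augment entry carries topic tags, the profile maps interests to weights, and the highest-scoring entry is chosen. The names and the scoring scheme are invented for illustration, not taken from the patent:

```python
def select_augment(candidates, profile):
    """Rank candidate augment entries by overlap between their topic tags
    and the user's weighted interests; return the id of the best entry,
    or None when nothing in the profile matches."""
    if not candidates:
        return None

    def score(entry):
        # Sum the profile weight of every tag the entry carries.
        return sum(profile.get(tag, 0.0) for tag in entry["tags"])

    best = max(candidates, key=score)
    return best["id"] if score(best) > 0 else None
```

With a profile weighted toward modern art, an art-gallery advertisement would beat a food coupon attached to the same office building, mirroring the example in the text.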
  • The user profile engine 328 assists the augment engine 325 in determining which augment information to select and transmit to the mobile computing device 100 to be added to the frames of the video stream being captured by the mobile computing device 100. In an embodiment, the IDOL server system may automatically profile the way the users interact with each other and with information on their mobile computing devices, build a conceptual understanding of their interests and location to deliver tailored commercial content. The IDOL server provides automatic notification as soon as new tracks and relevant products are released, or location-specific information such as traffic reports and up-to-the-minute news, without the user having to search.
  • Server Mirroring and Distributed Processing
  • FIG. 4 illustrates an example of a network diagram with mirrored servers that may be used to filter information received from the mobile computing devices, in accordance with some embodiments. Server computers 405M, 405A, 405B, and 405C connected to the network 200 may be configured as IDOL servers. The IDOL servers may include a main IDOL server 405M and multiple mirrored IDOL servers 405A-405C. The main IDOL server 405M may mirror its information onto the mirrored IDOL servers 405A-405C. The mirroring may include mirroring the content of the main IDOL server database 406M into the mirrored IDOL server databases 406A-406C. For example, the object database 330, the facial recognition database 340, and the augment information database 350 may be mirrored across all of the mirrored IDOL servers 405A-405C. The main IDOL server 405M and the mirrored IDOL servers 405A-405C may be located or distributed in various geographical locations to serve the mobile computing devices in these areas. For example, the main IDOL server 405M may be located in Paris, the mirrored IDOL server 405A may be located in Boston, 405B in Philadelphia, and 405C in New York.
  • Each of the IDOL servers illustrated in FIG. 4 may include its own object recognition engine 310, facial recognition engine 320, and augment engine 325. The distribution of servers within a given location helps to improve the identification and augmentation response time. The mirroring of identical server site locations also helps to improve the identification and augmentation response time. In addition, mirroring identical server sites aids in servicing potentially millions of mobile computing devices, all with the resident video application submitting packets with distinguishing features for the points of interest, by distributing the workload and limiting the physical transmission distance and associated time. The IDOL server set is duplicated with the same content and mirrored across the Internet to distribute this load to multiple identical sites, improving response time and increasing the capacity to handle the queries from those mobile computing devices.
  • For some embodiments, the video processing module 135 may include a coded block to call up and establish a persistent secure communication channel with a nearest non-overloaded mirrored site of the main IDOL server when the mobile computing device 100 is used to capture a video stream. For example, the mobile computing device 410A may be connected with the IDOL server 405A via communication channel 450 because both are located in Boston. However, when the IDOL server 405A is overloaded, the mobile computing device 410A may be connected with the IDOL server 405C in New York because it may not be overloaded even though the IDOL server 405C may be further from the mobile computing device 410A than the IDOL server 405A.
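The nearest-non-overloaded-mirror choice could look like the following sketch; the load threshold, field names, and the fallback to the least-loaded server are assumptions made for illustration:

```python
def choose_mirror(servers, max_load=0.8):
    """Pick the closest mirrored server whose load is under the threshold;
    if every mirror is overloaded, fall back to the least-loaded one."""
    available = [s for s in servers if s["load"] < max_load]
    if available:
        return min(available, key=lambda s: s["distance_km"])["name"]
    return min(servers, key=lambda s: s["load"])["name"]
```

This reproduces the Boston/New York example: a device in Boston normally uses the Boston mirror, but routes to a farther, less-loaded site when Boston is saturated.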
  • For some embodiments, a set of IDOL servers may be used to filter the information received from the mobile computing devices. A hierarchical set of filters may be spread linearly across the set of IDOL servers. These IDOL servers may work together in collaboration to process the transmitted object and/or person visual information to determine or recognize what the object or who the person is. For example, when the mobile computing device 410A establishes the communication channel 450 with the IDOL server 405A, the IDOL servers 405A-405C may work together to process the information received from the mobile computing device 410A. This collaboration is illustrated by the communication channel 451 between the IDOL server 405A and 405C, and the communication channel 452 between the IDOL server 405A and 405B. Similarly, when the mobile computing device 410B establishes communication channel 454 with the IDOL server 405C, the IDOL servers 405C, 405B and 405A may work together to process the information received from the mobile computing device 410B. This collaboration is illustrated by the communication channel 451 between the IDOL server 405C and 405A, and the communication channel 453 between the IDOL server 405C and 405B.
  • Each server in the set of servers applies filters to eliminate the pattern of features received from the mobile computing device 100 as possible matches to feature sets of known objects in the object database 330. Entire categories of possible matching objects can be eliminated simultaneously, while subsets even within a single category of possible matching objects can be simultaneously solved for on different servers. Each server may hierarchically rule out potentially known images on each machine to narrow down the hierarchical branch and leaf path to a match or no match for the analyzed object of interest.
  • The mobile computing device 100 has built-in Wi-Fi circuitry, and the video stream is transmitted to an IDOL server on the Internet. The IDOL server set contains an object recognition engine 310 distributed across the IDOL server set, IDOL databases, and an augment engine 325 as well. The object recognition engine 310 distributed across the IDOL server set applies a hierarchical set of filters to the transmitted identified points of interest and their associated major features within each frame of a video stream to determine what the one or more points of interest within that frame are. Since this is a video feed of a series of frames closely related both in time and in approximate location, the pattern of identified major features of points of interest within each frame of a video stream helps to narrow down the matching known objects stored in the object database 330.
  • The collaboration among the IDOL servers may help speed up the recognition process. For example, each of the IDOL servers may apply filters to eliminate certain pattern of features as possible matches to features of known objects stored in the object database 330. Entire categories of objects may be eliminated simultaneously, while subsets even within a single category of objects may be simultaneously identified as potential matching objects by the collaborating IDOL servers. Each IDOL server may hierarchically rule out potential known objects to narrow down the hierarchical branch and leaf path to determine whether there is a match.
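The category-level elimination across collaborating servers might be sketched as follows. The sharding is simulated with list slices, and the signature/member data structure is an assumption made for this example (the real system's filter hierarchy is not specified at this level of detail):

```python
def collaborative_match(category_index, query_features, num_servers=3):
    """Split object categories across 'servers'; each shard eliminates a whole
    category at once when its signature shares nothing with the query, then
    checks the individual members of the surviving categories."""
    items = sorted(category_index.items())
    shards = [items[i::num_servers] for i in range(num_servers)]
    survivors = []
    for shard in shards:  # in production each shard would run on a mirrored server
        for category, info in shard:
            if not (query_features & info["signature"]):
                continue  # entire category ruled out in one step
            for name, feats in info["members"].items():
                if feats <= query_features:  # all of the object's features observed
                    survivors.append(name)
    return sorted(survivors)
```

Eliminating an entire category with one set intersection is the payoff of the hierarchical filters: most of the database is never compared member by member.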
  • For some embodiments, each of the IDOL servers may match the pattern of the visually distinctive features of the points of interest in the frame to the known objects in the object database 330. The geometric shape formed by the X-Y coordinates of the features of a point of interest may come across to a human like a dot-to-dot connection illustration. When the X-Y coordinates of the dots on the grid of the paper are connected in the proper sequence, recognizing the image/object associated with those dots on the piece of paper is a simple task. This may include comparing the dot-to-dot type geometric shapes of the transmitted features, along with their distinctive colors, recognized text, numbers and symbols, geographical information, and direction information relative to the camera, to the feature sets stored in the object database 330. The dot-to-dot type geometric shapes can be subset into distinctive triangles, pyramids, rectangles, cubes, circles, cylinders, etc., each with its own associated distinctive colors or patterns, to aid in identification and recognition. Each of the IDOL servers, on a hierarchical basis, may map the collection of feature points about the points of interest to a stored pattern of feature points for known objects to match what is in the frames to the known object.
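The comparison of dot-to-dot feature outlines has to tolerate the camera's position and zoom, which can be illustrated by normalizing each point set to its centroid and unit scale before comparing. A simplified sketch (a real matcher would also need to tolerate noise and rotation, which this does not):

```python
def normalize(points):
    """Translate a point set to its centroid and scale it to unit size, so two
    dot-to-dot outlines compare equal regardless of camera distance or offset."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(round(x / scale, 3), round(y / scale, 3)) for x, y in shifted]


def same_outline(a, b):
    """True when two ordered point sets trace the same normalized shape."""
    return len(a) == len(b) and normalize(a) == normalize(b)
```

Two triangles captured at different zoom levels and screen positions normalize to the same outline, so they match the same stored feature pattern.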
  • For some embodiments, the video processing module 135 may continuously transmit the identified features of the points of interest 306 in the frames of the captured video stream 305 while the object recognition engine 310 (distributed over a large number of IDOL servers) and the augment engine 325 transmit back the augment information to augment identified images/objects in the captured frames of the video file stored in a memory of the mobile computing device 100, while that identified object is being shown on the display in near real time (e.g., less than 5 seconds).
  • As discussed, the server computer 300 has a set of one or more databases to store a scalable database of visual information on locations, such as buildings and structures, in order to perform subsequent matching of a visual data stream to determine the building or structure that is being viewed. The server-client system addresses the problem of determining the exact location of a mobile user, and of determining exactly what the user is looking at, at any point, by matching it against a database of characteristics associated with those visual images. The system gives the ability to construct a scalable solution to the problem of identifying location, regardless of position and with minimal training.
  • The system with the server computer 300 and a set of one or more databases (e.g., object database 330, facial recognition database 340, augment information database 350, user profile database 360) is trained on a set of views of the world and the models derived are stored for future retrieval. The combination of geographical information and visual characteristics allows a faster matching. Following this, the mobile computing device can be deployed to collect geospatial information and a video data stream from the camera and feed it back to the system. This is used to pinpoint the objects or locations within view and augment the video stream with additional visual or audiovisual objects or images.
  • Flow Diagrams
  • FIG. 5 illustrates an example flow diagram of a process that may execute on a mobile computing device to create an augmented video stream, in accordance with some embodiments. The process may be associated with operations that may be performed on the mobile computing device 100. The mobile computing device 100 may be capturing many frames of a video stream. As the frames are being captured, they are analyzed and characteristics information of objects in the frames is extracted, as shown in block 505. The extraction may involve the features, the geometric shape information, the distinct colors, the dot-to-dot type pattern, and other relevant information. The extraction may involve generating a pattern of X-Y coordinates of the geometric shapes of the point of interest and the colors associated with the shapes, along with the geographic coordinates from the GPS module and the direction information from the direction sensor 122 associated with the video camera 121 of the mobile computing device.
  • At block 510, the characteristics information and geographical information are transmitted to a server computer (e.g., server computer 300) in a network so that the server computer can filter the information and determine the augment information. The server computer that receives the characteristics information may be one that is geographically closest to the mobile computing device 100. If this server computer is overloaded, a nearby non-overloaded server computer may be selected instead. The selected server computer may collaborate with other mirrored server computers to determine the augment information. The server computers may perform comparing and matching operations using a hierarchical approach. The server computers may find different augment information that may be used. Criteria may be used to select the appropriate augment information to transmit to the mobile computing device 100.
  • At block 515, the augment information is received from the server computer. It may be possible that while the mobile computing device 100 is receiving the augment information for a series of frames, the mobile computing device 100 is also preparing characteristics information for another series of frames to be transmitted to the server computer. In general, for each frame in the video stream, a transmission packet containing the characteristics information of the point(s) of interest is transmitted to the server computer from the mobile computing device 100.
  • At block 520, the mobile computing device 100 may use the augment information to overlay the appropriate frames of the video stream and create an augmented video stream. At block 525, the augmented video stream is displayed on the display screen 106 of the mobile computing device 100.
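The per-frame characteristics packet described in the flow above (block 510) might be serialized and compressed like the following sketch. The payload fields are assumptions made for illustration, though the text does mention that the transmitted information may be compressed:

```python
import json
import zlib


def build_packet(frame_id, features, lat, lon, heading_deg):
    """Serialize and compress the per-frame characteristics packet that the
    video processing module sends in place of the raw frame."""
    payload = {
        "frame": frame_id,
        "features": features,          # X-Y points of interest, colors, etc.
        "gps": {"lat": lat, "lon": lon},
        "heading": heading_deg,        # from the direction sensor
    }
    return zlib.compress(json.dumps(payload).encode("utf-8"))


def read_packet(blob):
    """Server-side decompression logic: recover the payload dictionary."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

Sending a compressed feature packet per frame instead of the frame itself keeps the uplink small, which is what makes the near-real-time round trip to the server plausible.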
  • FIG. 6A illustrates an example flow diagram of a process that may execute on a server computer to determine augment information, in accordance with some embodiments. The operations associated with this process may be performed by many servers working collaboratively to provide the results to the mobile computing device in almost real time. The process may start at block 605 where the characteristics and geographical information are received from the mobile computing device 100. Direction information of the video camera 121 may also be received from the direction sensor 122. As mentioned earlier, the information transmitted from the mobile computing device 100 may be compressed. As such, the server may include decompression logic to decompress the information. The server may also include compression logic to compress the augment information if necessary. At block 610, the servers may perform comparing and matching or recognition operations. This may include filtering and eliminating any known objects that do not possess the same characteristics. This may include narrowing down to potential known objects that may possess the same characteristics.
  • It may be possible that there is a set of augment information for each known object, and the server may need to determine which augment information to select, as shown in block 615. At block 620, the augment information is transmitted to the mobile computing device 100. It may be possible that while the server is transmitting the augment information for a set of frames of a video stream, the server is also performing the operations in block 610 for another set of frames associated with the same video stream. It may be noted that the processes described in FIG. 5 and FIG. 6A may also be used to perform facial recognition using the facial recognition engine 320 and the facial recognition database 340.
  • FIG. 6B illustrates an example flow diagram of a process that may execute on a server computer to determine augment information based on user profile, in accordance with some embodiments. The operations associated with this process may be performed by an IDOL server and may expand on the operations described in block 615 of FIG. 6A. The process may start at block 625 where the identity of the mobile computing device 100 is verified. The identity information of the mobile computing device 100 may have been transmitted to the server computer 300 during the initial communication such as, for example, during the establishing of the communication channel between the mobile device 100 and the server computer 300. The identity information may be used by the user profile engine 328 to determine the appropriate user profile from the user profile database 360, as shown in block 630. As discussed, the user profile may have been collected as the mobile computing device 100 is used by the user over time. The user profile may include specific user-provided information. At block 635, the augment information may be selected based on the information in the user profile. This allows relevant augment information to be transmitted to the mobile computing device 100 for augmentation of the video stream 305, as shown in block 640.
  • FIG. 6C illustrates an example flow diagram of a process that may be used to determine distance based on the chirp signals generated by the mobile computing devices, in accordance with some embodiments. The process may operate after the facial recognition operations by the facial recognition engine 320 have been performed and positive recognition has occurred. The process may start at block 650 where the two mobile computing devices make initial chirp communication. At block 655, the first mobile computing device broadcasts the chirp signal a predetermined number of times (e.g., three times) and notes the clock times at which they were broadcast. At block 660, the second mobile computing device records an audio signal and detects the chirp signals and their clock times. At block 665, the procedure is reversed after a few seconds of pause (e.g., five (5) seconds) when the second mobile computing device broadcasts its chirp signal for the same predetermined number of times. The second device then notes its broadcast time, and sends detection time and broadcast time to the first device. At block 670, the first mobile computing device detects the chirp signals of the second mobile computing device in its recorded audio signal. At block 675, from the first mobile computing device, a first formula is used to determine the distance between the two mobile computing devices based on the measured clock times.
  • At block 680, a third mobile computing device listening to the chirp signals broadcast by the first and second mobile computing devices also detects them in its recorded audio signal and reports the detection times to the first mobile computing device. The third mobile computing device may be placed in a pre-determined location. At block 685, from the first mobile computing device, a second formula is used to calculate the position (x, y) of the second mobile computing device with respect to the first and third mobile computing devices, triangulating the positions and distances among all three mobile computing devices. At block 690, the video processing module 135 of the first and second mobile computing devices then overlays arrows or footsteps on the video stream being displayed on each respective display screen to indicate which direction each user of the first and second mobile computing devices should proceed in to meet up. Thus, a combination of scene analysis, facial recognition, and subsequent audio signal analysis is used to detect and determine a spatially-accurate location of one or more mobile computing devices.
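The second formula of block 685 is likewise not given, but once the chirp timings yield the three pairwise distances, the position of the second device follows from standard two-dimensional trilateration. A sketch, with the coordinate frame chosen (as an illustrative assumption) so that the first device sits at the origin and the third device lies on the x-axis:

```python
import math

def locate_second_device(d12, d13, d23):
    """Position (x, y) of device 2, taking device 1 as the origin and
    device 3 at (d13, 0) on the x-axis; d_ij are pairwise distances."""
    # Intersect the circle of radius d12 around device 1 with the
    # circle of radius d23 around device 3.
    x = (d12 ** 2 + d13 ** 2 - d23 ** 2) / (2 * d13)
    y = math.sqrt(max(d12 ** 2 - x ** 2, 0.0))  # choose the +y solution
    return (x, y)

# Devices at (0, 0), (3, 4), (6, 0) give d12 = 5, d13 = 6, d23 = 5.
x, y = locate_second_device(5.0, 6.0, 5.0)
print(x, y)  # → 3.0 4.0
```

Two circles intersect in two points, so a real system would need the third device's reported timings (or a heading estimate) to disambiguate the mirror solution before drawing the direction arrows of block 690.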
  • Intelligent Data Operating Layer (IDOL) Server
  • FIG. 7 illustrates an example block diagram of some modules of an IDOL server, in accordance with some embodiments. IDOL server 700 may include automatic hyperlinking module 705, automatic categorization module 710, automatic query guidance module 715, automatic taxonomy generation module 720, profiling module 725, automatic clustering module 730, and conceptual retrieval module 735. The automatic hyperlinking module 705 is configured to allow manual and fully automatic linking between related pieces of information. The hyperlinks are generated in real time, at the moment the document is viewed. The automatic categorization module 710 is configured to derive precise categories from concepts found within unstructured text, ensuring that all data is classified in the correct context.
  • The automatic query guidance module 715 is configured to provide query suggestions to help find the most relevant information. It identifies the different meanings of a term by dynamically clustering the results into their most relevant groupings. The automatic taxonomy generation module 720 is configured to automatically generate taxonomies and instantly organize the data into a familiar child/parent taxonomical structure. It identifies, names, and creates each node based on an understanding of the concepts within the data set as a whole. The profiling module 725 is configured to accurately understand an individual's interests based on the individual's browsing, content consumption, and content contribution. It generates a multifaceted conceptual profile of each user based on both explicit and implicit profiles.
  • The automatic clustering module 730 is configured to help analyze large sets of documents and user profiles and automatically identify inherent themes or information clusters. It can even cluster unstructured content exchanged in emails, telephone conversations, and instant messages. The conceptual retrieval module 735 is configured to recognize patterns using a scalable technology that recognizes concepts and finds information based on words that may not be located in the documents. It should be noted that the IDOL server 700 may also include other modules and features that enable it to work with the mobile computing device 100 to generate the augmented video stream as described herein. As described above, one or more of the modules of the IDOL server 700 may be used to implement the functionalities of the object recognition engine 310, the facial recognition engine 320, the augment engine 325, and the user profile engine 328.
  • Computer System
  • FIG. 8 illustrates an example computer system that may be used to implement an augmented video stream, in accordance with some embodiments. Computing environment 802 is only one example of a suitable computing environment and is not intended to suggest any limitations as to the scope of use or functionality of the embodiments of the present invention. Neither should the computing environment 802 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in FIG. 8.
  • Embodiments of the invention may be operational with general purpose or special purpose computer systems or configurations. Examples of well-known computer systems that may be used include, but are not limited to, personal computers, server computers, hand-held or laptop devices, tablets, smartphones, netbooks, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Embodiments of the present invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer system. Generally, program modules include routines, programs, databases, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Those skilled in the art can implement the description and/or figures herein as computer-executable instructions, which can be embodied on any form of computer readable media discussed below.
  • Embodiments of the present invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
  • Referring to FIG. 8, the computing environment 802 includes a general-purpose computer system 810. Components of the computer system 810 may include, but are not limited to, a processing unit 820 having one or more processing cores, a system memory 830, and a system bus 821 that couples various system components including the system memory to the processing unit 820. The system bus 821 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as Mezzanine bus.
  • Computer system 810 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer system 810 and include both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, uses of computer readable media include storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer system 810. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a transport mechanism and include any information delivery media.
  • The system memory 830 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 831 and random access memory (RAM) 832. A basic input/output system 833 (BIOS), containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, is typically stored in ROM 831. RAM 832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 820. By way of example, and not limitation, FIG. 8 illustrates operating system 834, application programs 835, other program modules 836, and program data 837.
  • The computer system 810 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only, FIG. 8 illustrates a hard disk drive 841 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 851 that reads from or writes to a removable, nonvolatile magnetic disk 852, and an optical disk drive 855 that reads from or writes to a removable, nonvolatile optical disk 856 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, USB drives and devices, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 841 is typically connected to the system bus 821 through a non-removable memory interface such as interface 840, and magnetic disk drive 851 and optical disk drive 855 are typically connected to the system bus 821 by a removable memory interface, such as interface 850.
  • The drives and their associated computer storage media discussed above and illustrated in FIG. 8, provide storage of computer readable instructions, data structures, program modules and other data for the computer system 810. In FIG. 8, for example, hard disk drive 841 is illustrated as storing operating system 844, application programs 845, other program modules 846, and program data 847. Note that these components can either be the same as or different from operating system 834, application programs 835, other program modules 836, and program data 837. The operating system 844, the application programs 845, the other program modules 846, and the program data 847 are given different numeric identification here to illustrate that, at a minimum, they are different copies.
  • A participant may enter commands and information into the computer system 810 through input devices such as a keyboard 862, a microphone 863, and a pointing device 861, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, scanner, or the like. These and other input devices are often connected to the processing unit 820 through a user input interface 860 that is coupled with the system bus 821, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 891 or other type of display device is also connected to the system bus 821 via an interface, such as a video interface 890. In addition to the monitor, computers may also include other peripheral output devices such as speakers 897 and printer 896, which may be connected through an output peripheral interface 890.
  • The computer system 810 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 880. The remote computer 880 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer system 810. The logical connections depicted in FIG. 8 include a local area network (LAN) 871 and a wide area network (WAN) 873, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer system 810 is connected to the LAN 871 through a network interface or adapter 870. When used in a WAN networking environment, the computer system 810 typically includes a modem 872 or other means for establishing communications over the WAN 873, such as the Internet. The modem 872, which may be internal or external, may be connected to the system bus 821 via the user-input interface 860, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer system 810, or portions thereof, may be stored in a remote memory storage device. By way of example, and not limitation, FIG. 8 illustrates remote application programs 885 as residing on remote computer 880. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • It should be noted that some embodiments of the present invention may be carried out on a computer system such as that described with respect to FIG. 8. However, some embodiments of the present invention may be carried out on a server, a computer devoted to message handling, handheld devices, or on a distributed system in which different portions of the present design may be carried out on different parts of the distributed computing system.
  • Another device that may be coupled with the system bus 821 is a power supply such as a battery, or a Direct Current (DC) power supply and Alternating Current (AC) adapter circuit. The DC power supply may be a battery, a fuel cell, or a similar DC power source that needs to be recharged on a periodic basis. The communication module (or modem) 872 may employ a Wireless Application Protocol (WAP) to establish a wireless communication channel. The communication module 872 may implement a wireless networking standard such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, IEEE Std. 802.11-1999, published by the IEEE in 1999.
  • Examples of mobile computing devices include a laptop computer, a tablet computer, a netbook, a cell phone, a personal digital assistant, or another similar device with on-board processing power and wireless communications ability, powered by a Direct Current (DC) power source, such as a fuel cell or a battery, that supplies DC voltage to the mobile computing device, is solely within the mobile computing device, and needs to be recharged on a periodic basis.
  • Although embodiments of this invention have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of embodiments of this invention as defined by the appended claims. For example, specific examples are provided for shapes and materials; however, embodiments include those variations obvious to a person skilled in the art, such as changing a shape or combining materials together. Further, while some specific embodiments of the invention have been shown, the invention is not to be limited to these embodiments. For example, several specific modules have been shown. Each module performs a few specific functions. However, all of these functions could be grouped into one module or even broken down further into scores of modules. Most functions performed by electronic hardware components may be duplicated by software emulation and vice versa. The invention is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.

Claims (20)

1. A client mobile computing system, comprising:
a processor, a memory, a built-in battery to power the mobile computing device, a built-in video camera and display screen for the mobile computing device, and built-in Wi-Fi circuitry to wirelessly communicate with a server computer connected to a network;
a video capturing module coupled with the processor and configured to capture a video stream;
a global positioning system (GPS) module coupled with the video capturing module and configured to generate geographical information associated with frames of the video stream to be captured by the video capturing module;
a video processing module coupled with the video capturing module and configured to analyze the frames of the video stream and extract features of points of interest included in the frames, the video processing module further configured to cause transmission of the features of the points of interest and the geographical information to the server computer and to receive augment information from the server computer, wherein the video processing module is configured to 1) overlay, 2) highlight, or 3) both overlay and highlight the points of interest in the frames of the video stream with the augment information to generate an augmented video stream; and
a display module coupled with the video processing module and configured to display the augmented video stream.
2. The system of claim 1, wherein the features of the points of interest comprise geometrical shapes associated with the points of interest and a color associated with each of the geometrical shapes, and wherein each of the geometrical shapes is associated with a pattern of X-Y coordinates.
3. The system of claim 2, further comprising a direction sensor coupled with the video camera and configured to sense direction information of the video camera, wherein the video processing module is further configured to cause transmission of the direction information of the camera to the server computer, wherein the features of the points of interest are transmitted as texts.
4. The system of claim 3, wherein the video processing module is further configured to perform compression operations prior to causing the transmission of the features of the points of interest, the geographical information, and the direction information of the video camera to the server computer.
5. The system of claim 4 wherein the compression operations use DivX compression technology.
6. The system of claim 5, wherein the video processing module is further configured to select the server computer among multiple server computers based on the location and workload of the server computer.
7. The system of claim 6, wherein the augment information comprises at least one of audio information, textual information, and video information.
8. The system of claim 6, wherein the features of the points of interest are associated with a person or an object included in the frames of the video stream.
9. The system of claim 1, further comprising an audio processing module coupled with the video processing module and configured to generate and detect chirp signals, and wherein the chirp signals are used to generate direction augment information to direct a first user toward a position of a second user based on a chirp signal associated with a mobile computing device of the second user.
10. A computer-implemented method for generating augmented video streams, the method comprising:
identifying characteristics information of points of interest included in frames of a video stream being captured;
transmitting the characteristics information and geographical information associated with the frames to a server computer connected to a network using wireless communication;
receiving augment information from the server computer, the augment information related to the points of interest included in the frames of the video stream;
overlaying the augment information onto the frames of the video stream to generate an augmented video stream; and
enabling the augmented video stream to be viewable on a display screen.
11. The method of claim 10, wherein the characteristics information is transmitted as texts.
12. The method of claim 10, further comprising performing compression using DivX compression technology prior to transmitting the characteristics information to the server computer.
13. The method of claim 10, wherein the characteristics information of the points of interest includes geometrical shapes and associated color information, and wherein each of the geometrical shapes is associated with a pattern of X-Y coordinates.
14. The method of claim 10 wherein the server computer is selected from a group of servers in the network, and wherein server selection criteria include server proximity and server workload.
15. The method of claim 10, further comprising transmitting video camera direction information to the server computer, wherein the video camera direction information is associated with a video camera used to capture the video stream, and wherein overlaying the augment information onto the frames of the video stream comprises outputting an audio portion of the augment information to a speaker.
16. The method of claim 10, further comprising:
exchanging chirp signals with another mobile computing device to determine a distance from a location of a current mobile computing device to the other mobile computing device;
receiving direction augment information from the server computer; and
presenting the direction augment information on the display screen of the current mobile computing device, wherein the direction augment information overlays a video stream being played on the display screen.
17. A computer-readable medium that stores instructions, which when executed by a machine, cause the machine to perform operations comprising:
detecting that a video stream is being captured by a video camera;
identifying features of objects included in a central area of frames of the video stream;
generating geometrical shapes of the features of the objects and patterns associated with each of the geometrical shapes;
causing transmission of the patterns of the geometrical shapes, color of each geometrical shape, geographical information associated with the frames, and direction information of a video camera to a server computer connected to a network using wireless communication;
receiving augment information from the server computer, the augment information related to the objects included in the frames of the video stream; and
overlaying the augment information onto the frames of the video stream.
18. The computer-readable medium of claim 17, wherein the server computer is selected based on location and workload, and wherein a set of mirrored server computers collaborate to determine the augment information, and wherein overlaying the augment information comprises sending an audio portion of the augment information to a speaker.
19. The computer-readable medium of claim 17, wherein the features of the objects are associated with a person, and wherein the augment information received from the server computer comprises information about the person.
20. The computer-readable medium of claim 19, further comprising exchanging chirp signals with another mobile computing device to determine distance from a current mobile device to the other mobile device, and wherein based on said exchanging of the chirp signals the augment information includes direction augment information.
US13/023,463 2011-02-08 2011-02-08 System to augment a visual data stream based on a combination of geographical and visual information Active 2031-07-13 US8488011B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US13/023,463 US8488011B2 (en) 2011-02-08 2011-02-08 System to augment a visual data stream based on a combination of geographical and visual information
PCT/US2012/024063 WO2012109182A1 (en) 2011-02-08 2012-02-07 A system to augment a visual data stream based on geographical and visual information
EP12744744.9A EP2673766B1 (en) 2011-02-08 2012-02-07 A system to augment a visual data stream based on geographical and visual information
CN201280008162.9A CN103635954B (en) 2011-02-08 2012-02-07 Strengthen the system of viewdata stream based on geographical and visual information
US13/940,069 US8953054B2 (en) 2011-02-08 2013-07-11 System to augment a visual data stream based on a combination of geographical and visual information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/023,463 US8488011B2 (en) 2011-02-08 2011-02-08 System to augment a visual data stream based on a combination of geographical and visual information

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/940,069 Continuation US8953054B2 (en) 2011-02-08 2013-07-11 System to augment a visual data stream based on a combination of geographical and visual information

Publications (2)

Publication Number Publication Date
US20120200743A1 true US20120200743A1 (en) 2012-08-09
US8488011B2 US8488011B2 (en) 2013-07-16

Family

ID=46600416

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/023,463 Active 2031-07-13 US8488011B2 (en) 2011-02-08 2011-02-08 System to augment a visual data stream based on a combination of geographical and visual information
US13/940,069 Active US8953054B2 (en) 2011-02-08 2013-07-11 System to augment a visual data stream based on a combination of geographical and visual information

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/940,069 Active US8953054B2 (en) 2011-02-08 2013-07-11 System to augment a visual data stream based on a combination of geographical and visual information

Country Status (4)

Country Link
US (2) US8488011B2 (en)
EP (1) EP2673766B1 (en)
CN (1) CN103635954B (en)
WO (1) WO2012109182A1 (en)

US10839219B1 (en) 2016-06-20 2020-11-17 Pipbin, Inc. System for curation, distribution and display of location-dependent augmented reality content
US10856037B2 (en) * 2014-03-20 2020-12-01 2MEE Ltd. Augmented reality apparatus and method
US10862951B1 (en) 2007-01-05 2020-12-08 Snap Inc. Real-time display of multiple images
US10885136B1 (en) 2018-02-28 2021-01-05 Snap Inc. Audience filtering system
US10915911B2 (en) 2017-02-03 2021-02-09 Snap Inc. System to determine a price-schedule to distribute media content
US10933311B2 (en) 2018-03-14 2021-03-02 Snap Inc. Generating collectible items based on location information
US10948717B1 (en) 2015-03-23 2021-03-16 Snap Inc. Reducing boot time and power consumption in wearable display systems
US10952013B1 (en) 2017-04-27 2021-03-16 Snap Inc. Selective location-based identity communication
US10963529B1 (en) 2017-04-27 2021-03-30 Snap Inc. Location-based search mechanism in a graphical user interface
US10979752B1 (en) 2018-02-28 2021-04-13 Snap Inc. Generating media content items based on location information
US10993069B2 (en) 2015-07-16 2021-04-27 Snap Inc. Dynamically adaptive media content delivery
US10997760B2 (en) 2018-08-31 2021-05-04 Snap Inc. Augmented reality anthropomorphization system
US11017173B1 (en) 2017-12-22 2021-05-25 Snap Inc. Named entity recognition visual context and caption data
US11023514B2 (en) 2016-02-26 2021-06-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US11030787B2 (en) 2017-10-30 2021-06-08 Snap Inc. Mobile-based cartographic control of display content
US11037372B2 (en) 2017-03-06 2021-06-15 Snap Inc. Virtual vision system
US20210185400A1 (en) * 2017-12-29 2021-06-17 Rovi Guides, Inc. Systems and methods for modifying fast-forward speeds based on the user's reaction time when detecting points of interest in content
CN112987035A (en) * 2021-02-07 2021-06-18 北京中交创新投资发展有限公司 Beidou edge computing equipment and method for acquiring inspection facilities based on equipment
US11044393B1 (en) 2016-06-20 2021-06-22 Pipbin, Inc. System for curation and display of location-dependent augmented reality content in an augmented estate system
US11128715B1 (en) 2019-12-30 2021-09-21 Snap Inc. Physical friend proximity in chat
US11163941B1 (en) 2018-03-30 2021-11-02 Snap Inc. Annotating a collection of media content items
US11170393B1 (en) 2017-04-11 2021-11-09 Snap Inc. System to calculate an engagement score of location based media content
CN113654548A (en) * 2021-07-16 2021-11-16 北京百度网讯科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
US11182383B1 (en) 2012-02-24 2021-11-23 Placed, Llc System and method for data collection to validate location data
US11201981B1 (en) 2016-06-20 2021-12-14 Pipbin, Inc. System for notification of user accessibility of curated location-dependent content in an augmented estate
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US11206615B2 (en) 2019-05-30 2021-12-21 Snap Inc. Wearable device location systems
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
US11216869B2 (en) 2014-09-23 2022-01-04 Snap Inc. User interface to augment an image using geolocation
US11228551B1 (en) 2020-02-12 2022-01-18 Snap Inc. Multiple gateway message exchange
US11232040B1 (en) 2017-04-28 2022-01-25 Snap Inc. Precaching unlockable data elements
US11250075B1 (en) 2017-02-17 2022-02-15 Snap Inc. Searching social media content
US11249614B2 (en) 2019-03-28 2022-02-15 Snap Inc. Generating personalized map interface with enhanced icons
US11265273B1 (en) 2017-12-01 2022-03-01 Snap, Inc. Dynamic media overlay with smart widget
US11280876B2 (en) 2018-06-18 2022-03-22 Qualcomm Incorporated Multi-radar coexistence using phase-coded frequency modulated continuous wave waveforms
US11290851B2 (en) 2020-06-15 2022-03-29 Snap Inc. Location sharing using offline and online objects
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US11301117B2 (en) 2019-03-08 2022-04-12 Snap Inc. Contextual information in chat
US11314776B2 (en) 2020-06-15 2022-04-26 Snap Inc. Location sharing using friend list versions
US11343323B2 (en) 2019-12-31 2022-05-24 Snap Inc. Augmented reality objects registry
US11361493B2 (en) 2019-04-01 2022-06-14 Snap Inc. Semantic texture mapping system
US11388226B1 (en) 2015-01-13 2022-07-12 Snap Inc. Guided personal identity based actions
US11385323B2 (en) 2018-06-25 2022-07-12 Qualcomm Incorporated Selection of frequency modulated continuous wave (FMWC) waveform parameters for multi-radar coexistence
US11393200B2 (en) * 2017-04-20 2022-07-19 Digimarc Corporation Hybrid feature point/watermark-based augmented reality
US11430091B2 (en) 2020-03-27 2022-08-30 Snap Inc. Location mapping for large scale augmented-reality
US11429618B2 (en) 2019-12-30 2022-08-30 Snap Inc. Surfacing augmented reality objects
US11455565B2 (en) 2017-08-31 2022-09-27 Ford Global Technologies, Llc Augmenting real sensor recordings with simulated sensor data
US11455082B2 (en) 2018-09-28 2022-09-27 Snap Inc. Collaborative achievement interface
US11475254B1 (en) 2017-09-08 2022-10-18 Snap Inc. Multimodal entity identification
US11483267B2 (en) 2020-06-15 2022-10-25 Snap Inc. Location sharing using different rate-limited links
US11487988B2 (en) 2017-08-31 2022-11-01 Ford Global Technologies, Llc Augmenting real sensor recordings with simulated sensor data
US11500525B2 (en) 2019-02-25 2022-11-15 Snap Inc. Custom media overlay system
US11503432B2 (en) 2020-06-15 2022-11-15 Snap Inc. Scalable real-time location sharing framework
US11507614B1 (en) 2018-02-13 2022-11-22 Snap Inc. Icon based tagging
US11516167B2 (en) 2020-03-05 2022-11-29 Snap Inc. Storing data based on device location
US11558709B2 (en) 2018-11-30 2023-01-17 Snap Inc. Position service to determine relative position to map features
US11574431B2 (en) 2019-02-26 2023-02-07 Snap Inc. Avatar based on weather
US11585889B2 (en) 2018-07-25 2023-02-21 Qualcomm Incorporated Methods for radar coexistence
US11601888B2 (en) 2021-03-29 2023-03-07 Snap Inc. Determining location using multi-source geolocation data
US11601783B2 (en) 2019-06-07 2023-03-07 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11606755B2 (en) 2019-05-30 2023-03-14 Snap Inc. Wearable device location systems architecture
US11616745B2 (en) 2017-01-09 2023-03-28 Snap Inc. Contextual generation and selection of customized media content
US11619501B2 (en) 2020-03-11 2023-04-04 Snap Inc. Avatar based on trip
US11625443B2 (en) 2014-06-05 2023-04-11 Snap Inc. Web document enhancement
US11631276B2 (en) 2016-03-31 2023-04-18 Snap Inc. Automated avatar generation
US11644529B2 (en) * 2018-03-26 2023-05-09 Qualcomm Incorporated Using a side-communication channel for exchanging radar information to improve multi-radar coexistence
US11645324B2 (en) 2021-03-31 2023-05-09 Snap Inc. Location-based timeline media content system
US11675831B2 (en) 2017-05-31 2023-06-13 Snap Inc. Geolocation based playlists
US11676378B2 (en) 2020-06-29 2023-06-13 Snap Inc. Providing travel-based augmented reality content with a captured image
US11714535B2 (en) 2019-07-11 2023-08-01 Snap Inc. Edge gesture interface with smart interactions
US11729343B2 (en) 2019-12-30 2023-08-15 Snap Inc. Including video feed in message thread
US11734712B2 (en) 2012-02-24 2023-08-22 Foursquare Labs, Inc. Attributing in-store visits to media consumption based on data collected from user devices
US11751015B2 (en) 2019-01-16 2023-09-05 Snap Inc. Location-based context information sharing in a messaging system
US11776256B2 (en) 2020-03-27 2023-10-03 Snap Inc. Shared augmented reality system
US11785161B1 (en) 2016-06-20 2023-10-10 Pipbin, Inc. System for user accessibility of tagged curated augmented reality content
US11799811B2 (en) 2018-10-31 2023-10-24 Snap Inc. Messaging and gaming applications communication platform
US11809624B2 (en) 2019-02-13 2023-11-07 Snap Inc. Sleep detection in a location sharing system
US11816853B2 (en) 2016-08-30 2023-11-14 Snap Inc. Systems and methods for simultaneous localization and mapping
US11821742B2 (en) 2019-09-26 2023-11-21 Snap Inc. Travel based notifications
US11829834B2 (en) 2021-10-29 2023-11-28 Snap Inc. Extended QR code
US11843456B2 (en) 2016-10-24 2023-12-12 Snap Inc. Generating and displaying customized avatars in media overlays
US11842411B2 (en) 2017-04-27 2023-12-12 Snap Inc. Location-based virtual avatars
US11852554B1 (en) 2019-03-21 2023-12-26 Snap Inc. Barometer calibration in a location sharing system
US11860888B2 (en) 2018-05-22 2024-01-02 Snap Inc. Event detection system
US11868414B1 (en) 2019-03-14 2024-01-09 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
US11870743B1 (en) 2017-01-23 2024-01-09 Snap Inc. Customized digital avatar accessories
US11877211B2 (en) 2019-01-14 2024-01-16 Snap Inc. Destination sharing in location sharing system
US11876941B1 (en) 2016-06-20 2024-01-16 Pipbin, Inc. Clickable augmented reality content manager, system, and network
US11893208B2 (en) 2019-12-31 2024-02-06 Snap Inc. Combined map icon with action indicator
US11900418B2 (en) 2016-04-04 2024-02-13 Snap Inc. Mutable geo-fencing system
US11925869B2 (en) 2012-05-08 2024-03-12 Snap Inc. System and method for generating and displaying avatars
US11943192B2 (en) 2020-08-31 2024-03-26 Snap Inc. Co-location connection service
US11961116B2 (en) 2020-10-26 2024-04-16 Foursquare Labs, Inc. Determining exposures to content presented by physical objects

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10083621B2 (en) 2004-05-27 2018-09-25 Zedasoft, Inc. System and method for streaming video into a container-based architecture simulation
JP6000954B2 (en) 2010-09-20 2016-10-05 Qualcomm Incorporated An adaptive framework for cloud-assisted augmented reality
US8810598B2 (en) 2011-04-08 2014-08-19 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US8493353B2 (en) 2011-04-13 2013-07-23 Longsand Limited Methods and systems for generating and joining shared experience
US9131284B2 (en) * 2013-01-04 2015-09-08 Omnivision Technologies, Inc. Video-in-video video stream having a three layer video scene
US8942535B1 (en) * 2013-04-04 2015-01-27 Google Inc. Implicit video location augmentation
US9582516B2 (en) 2013-10-17 2017-02-28 Nant Holdings Ip, Llc Wide area augmented reality location-based services
KR102223308B1 (en) * 2014-05-29 2021-03-08 삼성전자 주식회사 Method for image processing and electronic device implementing the same
US9373360B2 (en) 2014-07-02 2016-06-21 International Business Machines Corporation Instantaneous preview of data associated with a video
US9591349B2 (en) * 2014-12-23 2017-03-07 Intel Corporation Interactive binocular video display
CN104899261B (en) * 2015-05-20 2018-04-03 杜晓通 Apparatus and method for building structured video image information
US9837124B2 (en) 2015-06-30 2017-12-05 Microsoft Technology Licensing, Llc Layered interactive video platform for interactive video experiences
CN105872843B (en) * 2016-04-18 2019-02-19 青岛海信电器股份有限公司 Method and device for playing video
CN106375793B (en) * 2016-08-29 2019-12-13 东方网力科技股份有限公司 Video structured information superposition method, user terminal and superposition system
WO2019055679A1 (en) 2017-09-13 2019-03-21 Lahood Edward Rashid Method, apparatus and computer-readable media for displaying augmented reality information
EP3499438A1 (en) * 2017-12-13 2019-06-19 My Virtual Reality Software AS Method and system providing augmented reality for mining operations
US11064255B2 (en) * 2019-01-30 2021-07-13 Oohms Ny Llc System and method of tablet-based distribution of digital media content
US11297397B2 (en) 2019-09-07 2022-04-05 Mitsuru Okura Digital watermark embedded into images on surface of sports ball and system for detecting thereof
US11734784B2 (en) * 2019-11-14 2023-08-22 Sony Interactive Entertainment Inc. Metadata watermarking for ‘nested spectating’
CN113706891A (en) * 2020-05-20 2021-11-26 阿里巴巴集团控股有限公司 Traffic data transmission method, traffic data transmission device, electronic equipment and storage medium

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6222939B1 (en) 1996-06-25 2001-04-24 Eyematic Interfaces, Inc. Labeled bunch graphs for image analysis
US6400374B2 (en) 1996-09-18 2002-06-04 Eyematic Interfaces, Inc. Video superposition system and method
US6272231B1 (en) 1998-11-06 2001-08-07 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
US6714661B2 (en) 1998-11-06 2004-03-30 Nevengineering, Inc. Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image
US6563960B1 (en) 1999-09-28 2003-05-13 Hewlett-Packard Company Method for merging images
US7177651B1 (en) 2000-09-22 2007-02-13 Texas Instruments Incorporated System and method for the exchange of location information in a telephone network
US7333820B2 (en) 2001-07-17 2008-02-19 Networks In Motion, Inc. System and method for providing routing, mapping, and relative position information to users of a communication network
DE10150105A1 (en) 2001-10-11 2003-04-30 Siemens Ag Automatic determination of geometric models for optical part recognition
US7389526B1 (en) 2001-11-02 2008-06-17 At&T Delaware Intellectual Property, Inc. System and method for recording a digital video image
US6641037B2 (en) 2001-12-13 2003-11-04 Peter Williams Method and system for interactively providing product related information on demand and providing personalized transactional benefits at a point of purchase
US7084809B2 (en) 2002-07-15 2006-08-01 Qualcomm, Incorporated Apparatus and method of position determination using shared information
FR2842977A1 (en) 2002-07-24 2004-01-30 Total Immersion METHOD AND SYSTEM FOR ENABLING A USER TO MIX REAL-TIME SYNTHESIS IMAGES WITH VIDEO IMAGES
US7050786B2 (en) 2002-10-30 2006-05-23 Lockheed Martin Corporation Method and apparatus for locating a wireless device
US6845338B1 (en) 2003-02-25 2005-01-18 Symbol Technologies, Inc. Telemetric contextually based spatial audio system integrated into a mobile terminal wireless system
US8005958B2 (en) 2003-06-27 2011-08-23 Ixia Virtual interface
US7565139B2 (en) 2004-02-20 2009-07-21 Google Inc. Image-based search engine for mobile phones with camera
US7221902B2 (en) 2004-04-07 2007-05-22 Nokia Corporation Mobile station and interface adapted for feature extraction from an input media sample
US20050215239A1 (en) * 2004-03-26 2005-09-29 Nokia Corporation Feature extraction in a networked portable device
US20060218191A1 (en) 2004-08-31 2006-09-28 Gopalakrishnan Kumar C Method and System for Managing Multimedia Documents
US7765231B2 (en) 2005-04-08 2010-07-27 Rathus Spencer A System and method for accessing electronic data via an image search engine
US20080214153A1 (en) 2005-09-14 2008-09-04 Jorey Ramer Mobile User Profile Creation based on User Browse Behaviors
MX2007015979A (en) 2006-03-31 2009-04-07 Nielsen Media Res Inc Methods, systems, and apparatus for multi-purpose metering.
GB2436924A (en) 2006-04-08 2007-10-10 David Everett Portable security monitor
FR2911211B1 (en) 2007-01-05 2009-06-12 Total Immersion Sa METHOD AND DEVICES FOR REAL-TIME INSERTION OF VIRTUAL OBJECTS INTO AN IMAGE STREAM USING DATA FROM THE REAL SCENE REPRESENTED BY THESE IMAGES
FR2911707B1 (en) 2007-01-22 2009-07-10 Total Immersion Sa METHOD AND DEVICES FOR AUGMENTED REALITY USING REAL-TIME AUTOMATIC TRACKING OF TEXTURED, MARKER-FREE PLANAR GEOMETRIC OBJECTS IN A VIDEO STREAM.
EP1965344B1 (en) 2007-02-27 2017-06-28 Accenture Global Services Limited Remote object recognition
US20090176520A1 (en) 2007-04-12 2009-07-09 Telibrahma Convergent Communications Private Limited Generating User Contexts for Targeted Advertising
US8160980B2 (en) 2007-07-13 2012-04-17 Ydreams—Informatica, S.A. Information system based on time, space and relevance
US8644842B2 (en) 2007-09-04 2014-02-04 Nokia Corporation Personal augmented reality advertising
US8180396B2 (en) 2007-10-18 2012-05-15 Yahoo! Inc. User augmented reality for camera-enabled mobile devices
US8098881B2 (en) 2008-03-11 2012-01-17 Sony Ericsson Mobile Communications Ab Advertisement insertion systems and methods for digital cameras based on object recognition
US8732246B2 (en) 2008-03-14 2014-05-20 Madhavi Jayanthi Mobile social network for facilitating GPS based services
US8010327B2 (en) 2008-04-25 2011-08-30 Total Immersion Software, Inc. Composite assets for use in multiple simulation environments
US20090276154A1 (en) 2008-04-30 2009-11-05 Verizon Corporate Services Group Inc. Method and system for coordinating group travel among mobile devices
US8976027B2 (en) 2008-06-06 2015-03-10 Harris Corporation Information processing system for consumers at a store using personal mobile wireless devices and related methods
US8700301B2 (en) * 2008-06-19 2014-04-15 Microsoft Corporation Mobile computing devices, architecture and user interfaces based on dynamic direction information
US20100081458A1 (en) 2008-10-01 2010-04-01 Qualcomm Incorporated Mobile Terminal Motion Detection Methods and Systems
US7966641B2 (en) 2008-10-23 2011-06-21 Sony Corporation User identification using Bluetooth and audio ranging
WO2010051342A1 (en) 2008-11-03 2010-05-06 Veritrix, Inc. User authentication for social networks
JP5386946B2 (en) * 2008-11-26 2014-01-15 ソニー株式会社 Image processing apparatus, image processing method, image processing program, and image processing system
US20100309225A1 (en) 2009-06-03 2010-12-09 Gray Douglas R Image matching for mobile augmented reality
US20100325126A1 (en) 2009-06-18 2010-12-23 Rajaram Shyam S Recommendation based on low-rank approximation
KR101096392B1 (en) * 2010-01-29 2011-12-22 주식회사 팬택 System and method for providing augmented reality
US8958815B2 (en) 2010-02-12 2015-02-17 Broadcom Corporation Method and system for characterizing location and/or range based on transmit power
US8884871B2 (en) 2010-02-26 2014-11-11 Thl Holding Company, Llc Adjunct device for use with a handheld wireless communication device as a screen pointer
US8599011B2 (en) 2010-07-30 2013-12-03 Q-Track Corporation Firefighter location and rescue equipment employing path comparison of mobile tags

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7050787B2 (en) * 2002-10-30 2006-05-23 Lockheed Martin Corporation Cooperative element location system
US6906643B2 (en) * 2003-04-30 2005-06-14 Hewlett-Packard Development Company, L.P. Systems and methods of viewing, modifying, and interacting with “path-enhanced” multimedia
US20080268870A1 (en) * 2005-02-03 2008-10-30 Cyril Houri Method and System for Obtaining Location of a Mobile Device
US20080077952A1 (en) * 2006-09-25 2008-03-27 St Jean Randy Dynamic Association of Advertisements and Digital Video Content, and Overlay of Advertisements on Content
US20080165843A1 (en) * 2007-01-03 2008-07-10 Human Monitoring Ltd. Architecture for image compression in a video hardware
US20080268876A1 (en) * 2007-04-24 2008-10-30 Natasha Gelfand Method, Device, Mobile Terminal, and Computer Program Product for a Point of Interest Based Scheme for Improving Mobile Visual Searching Functionalities
US8400548B2 (en) * 2010-01-05 2013-03-19 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US20110199479A1 (en) * 2010-02-12 2011-08-18 Apple Inc. Augmented reality maps

Cited By (382)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10862951B1 (en) 2007-01-05 2020-12-08 Snap Inc. Real-time display of multiple images
US11588770B2 (en) 2007-01-05 2023-02-21 Snap Inc. Real-time display of multiple images
US10334307B2 (en) 2011-07-12 2019-06-25 Snap Inc. Methods and systems of providing visual content editing functions
US11750875B2 (en) 2011-07-12 2023-09-05 Snap Inc. Providing visual content editing functions
US11451856B2 (en) 2011-07-12 2022-09-20 Snap Inc. Providing visual content editing functions
US10999623B2 (en) 2011-07-12 2021-05-04 Snap Inc. Providing visual content editing functions
US20130031202A1 (en) * 2011-07-26 2013-01-31 Mick Jason L Using Augmented Reality To Create An Interface For Datacenter And Systems Management
US9557807B2 (en) * 2011-07-26 2017-01-31 Rackspace Us, Inc. Using augmented reality to create an interface for datacenter and systems management
US20150040074A1 (en) * 2011-08-18 2015-02-05 Layar B.V. Methods and systems for enabling creation of augmented reality content
US20130051687A1 (en) * 2011-08-25 2013-02-28 Canon Kabushiki Kaisha Image processing system and image processing method
US10459968B2 (en) 2011-08-25 2019-10-29 Canon Kabushiki Kaisha Image processing system and image processing method
US9685000B1 (en) * 2011-09-28 2017-06-20 EMC IP Holding Company LLC Using augmented reality in data storage management
US20170116644A1 (en) * 2011-11-15 2017-04-27 Excalibur Ip, Llc Providing advertisements in an augmented reality environment
US9232194B2 (en) * 2011-11-29 2016-01-05 Canon Kabushiki Kaisha Imaging apparatus, display method, and storage medium for presenting a candidate object information to a photographer
US20130135464A1 (en) * 2011-11-29 2013-05-30 Canon Kabushiki Kaisha Imaging apparatus, display method, and storage medium
US11734712B2 (en) 2012-02-24 2023-08-22 Foursquare Labs, Inc. Attributing in-store visits to media consumption based on data collected from user devices
US11182383B1 (en) 2012-02-24 2021-11-23 Placed, Llc System and method for data collection to validate location data
US9872143B2 (en) 2012-02-29 2018-01-16 Google Llc System and method for requesting an updated user location
US11265676B2 (en) 2012-02-29 2022-03-01 Google Llc System and method for requesting an updated user location
US9325797B2 (en) * 2012-02-29 2016-04-26 Google Inc. System and method for requesting an updated user location
US11825378B2 (en) 2012-02-29 2023-11-21 Google Llc System and method for requesting an updated user location
US20150172394A1 (en) * 2012-02-29 2015-06-18 Google Inc. System and method for requesting an updated user location
US10484821B2 (en) 2012-02-29 2019-11-19 Google Llc System and method for requesting an updated user location
US11925869B2 (en) 2012-05-08 2024-03-12 Snap Inc. System and method for generating and displaying avatars
US9721394B2 (en) 2012-08-22 2017-08-01 Snaps Media, Inc. Augmented reality virtual content platform apparatuses, methods and systems
US9792733B2 (en) 2012-08-22 2017-10-17 Snaps Media, Inc. Augmented reality virtual content platform apparatuses, methods and systems
US10169924B2 (en) 2012-08-22 2019-01-01 Snaps Media Inc. Augmented reality virtual content platform apparatuses, methods and systems
US11361542B2 (en) 2012-09-12 2022-06-14 2Mee Ltd Augmented reality apparatus and method
US20150242688A1 (en) * 2012-09-12 2015-08-27 2MEE Ltd. Augmented reality apparatus and method
US10885333B2 (en) * 2012-09-12 2021-01-05 2Mee Ltd Augmented reality apparatus and method
US20140095505A1 (en) * 2012-10-01 2014-04-03 Longsand Limited Performance and scalability in an intelligent data operating layer system
US9323767B2 (en) * 2012-10-01 2016-04-26 Longsand Limited Performance and scalability in an intelligent data operating layer system
US20140125870A1 (en) * 2012-11-05 2014-05-08 Exelis Inc. Image Display Utilizing Programmable and Multipurpose Processors
US9882907B1 (en) 2012-11-08 2018-01-30 Snap Inc. Apparatus and method for single action control of social network profile access
US10887308B1 (en) 2012-11-08 2021-01-05 Snap Inc. Interactive user-interface to adjust access privileges
US11252158B2 (en) 2012-11-08 2022-02-15 Snap Inc. Interactive user-interface to adjust access privileges
US20140298383A1 (en) * 2013-03-29 2014-10-02 Intellectual Discovery Co., Ltd. Server and method for transmitting personalized augmented reality object
US9705831B2 (en) 2013-05-30 2017-07-11 Snap Inc. Apparatus and method for maintaining a message thread with opt-in permanence for entries
US10587552B1 (en) 2013-05-30 2020-03-10 Snap Inc. Apparatus and method for maintaining a message thread with opt-in permanence for entries
US9742713B2 (en) 2013-05-30 2017-08-22 Snap Inc. Apparatus and method for maintaining a message thread with opt-in permanence for entries
US11509618B2 (en) 2013-05-30 2022-11-22 Snap Inc. Maintaining a message thread with opt-in permanence for entries
US10439972B1 (en) 2013-05-30 2019-10-08 Snap Inc. Apparatus and method for maintaining a message thread with opt-in permanence for entries
US11134046B2 (en) 2013-05-30 2021-09-28 Snap Inc. Apparatus and method for maintaining a message thread with opt-in permanence for entries
US11115361B2 (en) 2013-05-30 2021-09-07 Snap Inc. Apparatus and method for maintaining a message thread with opt-in permanence for entries
US9729825B2 (en) * 2013-07-09 2017-08-08 Alcatel Lucent Method for generating an immersive video of a plurality of persons
US20160150187A1 (en) * 2013-07-09 2016-05-26 Alcatel Lucent A method for generating an immersive video of a plurality of persons
US9558593B2 (en) * 2013-11-05 2017-01-31 Sony Corporation Terminal apparatus, additional information managing apparatus, additional information managing method, and program
US20150124106A1 (en) * 2013-11-05 2015-05-07 Sony Computer Entertainment Inc. Terminal apparatus, additional information managing apparatus, additional information managing method, and program
US10681092B1 (en) 2013-11-26 2020-06-09 Snap Inc. Method and system for integrating real time communication features in applications
US10069876B1 (en) 2013-11-26 2018-09-04 Snap Inc. Method and system for integrating real time communication features in applications
US11546388B2 (en) 2013-11-26 2023-01-03 Snap Inc. Method and system for integrating real time communication features in applications
US9083770B1 (en) 2013-11-26 2015-07-14 Snapchat, Inc. Method and system for integrating real time communication features in applications
US9794303B1 (en) 2013-11-26 2017-10-17 Snap Inc. Method and system for integrating real time communication features in applications
US11102253B2 (en) 2013-11-26 2021-08-24 Snap Inc. Method and system for integrating real time communication features in applications
US9936030B2 (en) 2014-01-03 2018-04-03 Investel Capital Corporation User content sharing system and method with location-based external content integration
US10080102B1 (en) 2014-01-12 2018-09-18 Investment Asset Holdings Llc Location-based messaging
US9866999B1 (en) 2014-01-12 2018-01-09 Investment Asset Holdings Llc Location-based messaging
US10349209B1 (en) 2014-01-12 2019-07-09 Investment Asset Holdings Llc Location-based messaging
US9473745B2 (en) * 2014-01-30 2016-10-18 Google Inc. System and method for providing live imagery associated with map locations
US9836826B1 (en) 2014-01-30 2017-12-05 Google Llc System and method for providing live imagery associated with map locations
US20150215585A1 (en) * 2014-01-30 2015-07-30 Google Inc. System and method for providing live imagery associated with map locations
US10084735B1 (en) 2014-02-21 2018-09-25 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US10958605B1 (en) 2014-02-21 2021-03-23 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US11902235B2 (en) 2014-02-21 2024-02-13 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US10949049B1 (en) 2014-02-21 2021-03-16 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US11463393B2 (en) 2014-02-21 2022-10-04 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US10082926B1 (en) 2014-02-21 2018-09-25 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US11463394B2 (en) 2014-02-21 2022-10-04 Snap Inc. Apparatus and method for alternate channel communication initiated through a common message thread
US9407712B1 (en) 2014-03-07 2016-08-02 Snapchat, Inc. Content delivery network for ephemeral objects
US9237202B1 (en) 2014-03-07 2016-01-12 Snapchat, Inc. Content delivery network for ephemeral objects
US10856037B2 (en) * 2014-03-20 2020-12-01 2MEE Ltd. Augmented reality apparatus and method
US11363325B2 (en) 2014-03-20 2022-06-14 2Mee Ltd Augmented reality apparatus and method
WO2015147760A1 (en) * 2014-03-24 2015-10-01 Varga Oliver Live transmission of video with parameters and device therefor
CN103957253A (en) * 2014-04-29 2014-07-30 天脉聚源(北京)传媒科技有限公司 Method and device for cloud management
US11310183B2 (en) 2014-05-09 2022-04-19 Snap Inc. Dynamic configuration of application component tiles
US11743219B2 (en) 2014-05-09 2023-08-29 Snap Inc. Dynamic configuration of application component tiles
US10817156B1 (en) 2014-05-09 2020-10-27 Snap Inc. Dynamic configuration of application component tiles
US9276886B1 (en) 2014-05-09 2016-03-01 Snapchat, Inc. Apparatus and method for dynamically configuring application component tiles
US10990697B2 (en) 2014-05-28 2021-04-27 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US9396354B1 (en) 2014-05-28 2016-07-19 Snapchat, Inc. Apparatus and method for automated privacy protection in distributed images
US9785796B1 (en) 2014-05-28 2017-10-10 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US10572681B1 (en) 2014-05-28 2020-02-25 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US11921805B2 (en) 2014-06-05 2024-03-05 Snap Inc. Web document enhancement
US11625443B2 (en) 2014-06-05 2023-04-11 Snap Inc. Web document enhancement
US11094131B2 (en) 2014-06-10 2021-08-17 2Mee Ltd Augmented reality apparatus and method
US10679413B2 (en) 2014-06-10 2020-06-09 2Mee Ltd Augmented reality apparatus and method
US9430783B1 (en) 2014-06-13 2016-08-30 Snapchat, Inc. Prioritization of messages within gallery
US11166121B2 (en) 2014-06-13 2021-11-02 Snap Inc. Prioritization of messages within a message collection
US10659914B1 (en) 2014-06-13 2020-05-19 Snap Inc. Geo-location based event gallery
US10623891B2 (en) 2014-06-13 2020-04-14 Snap Inc. Prioritization of messages within a message collection
US11317240B2 (en) 2014-06-13 2022-04-26 Snap Inc. Geo-location based event gallery
US9693191B2 (en) 2014-06-13 2017-06-27 Snap Inc. Prioritization of messages within gallery
US10448201B1 (en) 2014-06-13 2019-10-15 Snap Inc. Prioritization of messages within a message collection
US9825898B2 (en) 2014-06-13 2017-11-21 Snap Inc. Prioritization of messages within a message collection
US10779113B2 (en) 2014-06-13 2020-09-15 Snap Inc. Prioritization of messages within a message collection
US10200813B1 (en) 2014-06-13 2019-02-05 Snap Inc. Geo-location based event gallery
US9532171B2 (en) 2014-06-13 2016-12-27 Snap Inc. Geo-location based event gallery
US10182311B2 (en) 2014-06-13 2019-01-15 Snap Inc. Prioritization of messages within a message collection
US9113301B1 (en) 2014-06-13 2015-08-18 Snapchat, Inc. Geo-location based event gallery
US9094137B1 (en) 2014-06-13 2015-07-28 Snapchat, Inc. Priority based placement of messages in a geo-location based event gallery
US10524087B1 (en) 2014-06-13 2019-12-31 Snap Inc. Message destination list mechanism
US10602057B1 (en) * 2014-07-07 2020-03-24 Snap Inc. Supplying content aware photo filters
US11595569B2 (en) 2014-07-07 2023-02-28 Snap Inc. Supplying content aware photo filters
US11122200B2 (en) 2014-07-07 2021-09-14 Snap Inc. Supplying content aware photo filters
US20230020575A1 (en) * 2014-07-07 2023-01-19 Snap Inc. Apparatus and method for supplying content aware photo filters
US11496673B1 (en) 2014-07-07 2022-11-08 Snap Inc. Apparatus and method for supplying content aware photo filters
US10348960B1 (en) * 2014-07-07 2019-07-09 Snap Inc. Apparatus and method for supplying content aware photo filters
US9225897B1 (en) * 2014-07-07 2015-12-29 Snapchat, Inc. Apparatus and method for supplying content aware photo filters
US10701262B1 (en) * 2014-07-07 2020-06-30 Snap Inc. Apparatus and method for supplying content aware photo filters
US11849214B2 (en) * 2014-07-07 2023-12-19 Snap Inc. Apparatus and method for supplying content aware photo filters
US10154192B1 (en) 2014-07-07 2018-12-11 Snap Inc. Apparatus and method for supplying content aware photo filters
US9407816B1 (en) 2014-07-07 2016-08-02 Snapchat, Inc. Apparatus and method for supplying content aware photo filters
US10432850B1 (en) 2014-07-07 2019-10-01 Snap Inc. Apparatus and method for supplying content aware photo filters
US11017363B1 (en) 2014-08-22 2021-05-25 Snap Inc. Message processor with application prompts
US10055717B1 (en) 2014-08-22 2018-08-21 Snap Inc. Message processor with application prompts
US11625755B1 (en) 2014-09-16 2023-04-11 Foursquare Labs, Inc. Determining targeting information based on a predictive targeting model
US10423983B2 (en) 2014-09-16 2019-09-24 Snap Inc. Determining targeting information based on a predictive targeting model
US11741136B2 (en) 2014-09-18 2023-08-29 Snap Inc. Geolocation-based pictographs
US10824654B2 (en) 2014-09-18 2020-11-03 Snap Inc. Geolocation-based pictographs
US11281701B2 (en) 2014-09-18 2022-03-22 Snap Inc. Geolocation-based pictographs
US11216869B2 (en) 2014-09-23 2022-01-04 Snap Inc. User interface to augment an image using geolocation
US10708210B1 (en) 2014-10-02 2020-07-07 Snap Inc. Multi-user ephemeral message gallery
US10476830B2 (en) 2014-10-02 2019-11-12 Snap Inc. Ephemeral gallery of ephemeral messages
US20170374003A1 (en) 2014-10-02 2017-12-28 Snapchat, Inc. Ephemeral gallery of ephemeral messages
US10284508B1 (en) 2014-10-02 2019-05-07 Snap Inc. Ephemeral gallery of ephemeral messages with opt-in permanence
US11411908B1 (en) 2014-10-02 2022-08-09 Snap Inc. Ephemeral message gallery user interface with online viewing history indicia
US11012398B1 (en) 2014-10-02 2021-05-18 Snap Inc. Ephemeral message gallery user interface with screenshot messages
US9537811B2 (en) 2014-10-02 2017-01-03 Snap Inc. Ephemeral gallery of ephemeral messages
US11522822B1 (en) 2014-10-02 2022-12-06 Snap Inc. Ephemeral gallery elimination based on gallery and message timers
US11038829B1 (en) 2014-10-02 2021-06-15 Snap Inc. Ephemeral gallery of ephemeral messages with opt-in permanence
US11855947B1 (en) 2014-10-02 2023-12-26 Snap Inc. Gallery of ephemeral messages
US10944710B1 (en) 2014-10-02 2021-03-09 Snap Inc. Ephemeral gallery user interface with remaining gallery time indication
US10958608B1 (en) 2014-10-02 2021-03-23 Snap Inc. Ephemeral gallery of visual media messages
US11956533B2 (en) 2014-11-12 2024-04-09 Snap Inc. Accessing media at a geographic location
US11190679B2 (en) 2014-11-12 2021-11-30 Snap Inc. Accessing media at a geographic location
US10616476B1 (en) 2014-11-12 2020-04-07 Snap Inc. User interface for accessing media at a geographic location
US9843720B1 (en) 2014-11-12 2017-12-12 Snap Inc. User interface for accessing media at a geographic location
US11250887B2 (en) 2014-12-19 2022-02-15 Snap Inc. Routing messages by message parameter
US11803345B2 (en) 2014-12-19 2023-10-31 Snap Inc. Gallery of messages from individuals with a shared interest
US10811053B2 (en) 2014-12-19 2020-10-20 Snap Inc. Routing messages by message parameter
US10580458B2 (en) 2014-12-19 2020-03-03 Snap Inc. Gallery of videos set to an audio time line
US10514876B2 (en) 2014-12-19 2019-12-24 Snap Inc. Gallery of messages from individuals with a shared interest
US9854219B2 (en) 2014-12-19 2017-12-26 Snap Inc. Gallery of videos set to an audio time line
US10311916B2 (en) 2014-12-19 2019-06-04 Snap Inc. Gallery of videos set to an audio time line
US9385983B1 (en) 2014-12-19 2016-07-05 Snapchat, Inc. Gallery of messages from individuals with a shared interest
US11783862B2 (en) 2014-12-19 2023-10-10 Snap Inc. Routing messages by message parameter
US11372608B2 (en) 2014-12-19 2022-06-28 Snap Inc. Gallery of messages from individuals with a shared interest
US10157449B1 (en) 2015-01-09 2018-12-18 Snap Inc. Geo-location-based image filters
US11734342B2 (en) 2015-01-09 2023-08-22 Snap Inc. Object recognition based image overlays
US11301960B2 (en) 2015-01-09 2022-04-12 Snap Inc. Object recognition based image filters
US10380720B1 (en) 2015-01-09 2019-08-13 Snap Inc. Location-based image filters
US11388226B1 (en) 2015-01-13 2022-07-12 Snap Inc. Guided personal identity based actions
US10133705B1 (en) 2015-01-19 2018-11-20 Snap Inc. Multichannel system
US10416845B1 (en) 2015-01-19 2019-09-17 Snap Inc. Multichannel system
US11249617B1 (en) 2015-01-19 2022-02-15 Snap Inc. Multichannel system
US10123166B2 (en) 2015-01-26 2018-11-06 Snap Inc. Content request by location
US10932085B1 (en) 2015-01-26 2021-02-23 Snap Inc. Content request by location
US11910267B2 (en) 2015-01-26 2024-02-20 Snap Inc. Content request by location
US10536800B1 (en) 2015-01-26 2020-01-14 Snap Inc. Content request by location
US11528579B2 (en) 2015-01-26 2022-12-13 Snap Inc. Content request by location
US10223397B1 (en) 2015-03-13 2019-03-05 Snap Inc. Social graph based co-location of network users
US11902287B2 (en) 2015-03-18 2024-02-13 Snap Inc. Geo-fence authorization provisioning
US10616239B2 (en) 2015-03-18 2020-04-07 Snap Inc. Geo-fence authorization provisioning
US10893055B2 (en) 2015-03-18 2021-01-12 Snap Inc. Geo-fence authorization provisioning
US11320651B2 (en) 2015-03-23 2022-05-03 Snap Inc. Reducing boot time and power consumption in displaying data content
US10948717B1 (en) 2015-03-23 2021-03-16 Snap Inc. Reducing boot time and power consumption in wearable display systems
US11662576B2 (en) 2015-03-23 2023-05-30 Snap Inc. Reducing boot time and power consumption in displaying data content
US10911575B1 (en) 2015-05-05 2021-02-02 Snap Inc. Systems and methods for story and sub-story navigation
US9881094B2 (en) 2015-05-05 2018-01-30 Snap Inc. Systems and methods for automated local story generation and curation
US10135949B1 (en) 2015-05-05 2018-11-20 Snap Inc. Systems and methods for story and sub-story navigation
US10592574B2 (en) 2015-05-05 2020-03-17 Snap Inc. Systems and methods for automated local story generation and curation
US11496544B2 (en) 2015-05-05 2022-11-08 Snap Inc. Story and sub-story navigation
US11392633B2 (en) 2015-05-05 2022-07-19 Snap Inc. Systems and methods for automated local story generation and curation
US11449539B2 (en) 2015-05-05 2022-09-20 Snap Inc. Automated local story generation and curation
EP3110162A1 (en) * 2015-06-25 2016-12-28 STMicroelectronics International N.V. Enhanced augmented reality multimedia system
US10993069B2 (en) 2015-07-16 2021-04-27 Snap Inc. Dynamically adaptive media content delivery
US10817898B2 (en) 2015-08-13 2020-10-27 Placed, Llc Determining exposures to content presented by physical objects
US10733802B2 (en) 2015-10-30 2020-08-04 Snap Inc. Image based tracking in augmented reality systems
US11769307B2 (en) 2015-10-30 2023-09-26 Snap Inc. Image based tracking in augmented reality systems
US10366543B1 (en) 2015-10-30 2019-07-30 Snap Inc. Image based tracking in augmented reality systems
US10102680B2 (en) 2015-10-30 2018-10-16 Snap Inc. Image based tracking in augmented reality systems
US11315331B2 (en) 2015-10-30 2022-04-26 Snap Inc. Image based tracking in augmented reality systems
US10474321B2 (en) 2015-11-30 2019-11-12 Snap Inc. Network resource location linking and visual content sharing
US10997783B2 (en) 2015-11-30 2021-05-04 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US11599241B2 (en) 2015-11-30 2023-03-07 Snap Inc. Network resource location linking and visual content sharing
US11380051B2 (en) 2015-11-30 2022-07-05 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US10657708B1 (en) 2015-11-30 2020-05-19 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US10354425B2 (en) 2015-12-18 2019-07-16 Snap Inc. Method and system for providing context relevant media augmentation
US11468615B2 (en) 2015-12-18 2022-10-11 Snap Inc. Media overlay publication system
US11830117B2 (en) 2015-12-18 2023-11-28 Snap Inc Media overlay publication system
US10997758B1 (en) 2015-12-18 2021-05-04 Snap Inc. Media overlay publication system
US11889381B2 (en) 2016-02-26 2024-01-30 Snap Inc. Generation, curation, and presentation of media collections
US11197123B2 (en) 2016-02-26 2021-12-07 Snap Inc. Generation, curation, and presentation of media collections
US11023514B2 (en) 2016-02-26 2021-06-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US11611846B2 (en) 2016-02-26 2023-03-21 Snap Inc. Generation, curation, and presentation of media collections
US10834525B2 (en) 2016-02-26 2020-11-10 Snap Inc. Generation, curation, and presentation of media collections
US10679389B2 (en) 2016-02-26 2020-06-09 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US11631276B2 (en) 2016-03-31 2023-04-18 Snap Inc. Automated avatar generation
US11900418B2 (en) 2016-04-04 2024-02-13 Snap Inc. Mutable geo-fencing system
US11785161B1 (en) 2016-06-20 2023-10-10 Pipbin, Inc. System for user accessibility of tagged curated augmented reality content
US11201981B1 (en) 2016-06-20 2021-12-14 Pipbin, Inc. System for notification of user accessibility of curated location-dependent content in an augmented estate
US10992836B2 (en) 2016-06-20 2021-04-27 Pipbin, Inc. Augmented property system of curated augmented reality media elements
US10638256B1 (en) 2016-06-20 2020-04-28 Pipbin, Inc. System for distribution and display of mobile targeted augmented reality content
US10839219B1 (en) 2016-06-20 2020-11-17 Pipbin, Inc. System for curation, distribution and display of location-dependent augmented reality content
US11876941B1 (en) 2016-06-20 2024-01-16 Pipbin, Inc. Clickable augmented reality content manager, system, and network
US10805696B1 (en) 2016-06-20 2020-10-13 Pipbin, Inc. System for recording and targeting tagged content of user interest
US11044393B1 (en) 2016-06-20 2021-06-22 Pipbin, Inc. System for curation and display of location-dependent augmented reality content in an augmented estate system
US10165402B1 (en) 2016-06-28 2018-12-25 Snap Inc. System to track engagement of media items
US10885559B1 (en) 2016-06-28 2021-01-05 Snap Inc. Generation, curation, and presentation of media collections with automated advertising
US11445326B2 (en) 2016-06-28 2022-09-13 Snap Inc. Track engagement of media items
US10430838B1 (en) 2016-06-28 2019-10-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections with automated advertising
US11640625B2 (en) 2016-06-28 2023-05-02 Snap Inc. Generation, curation, and presentation of media collections with automated advertising
US10735892B2 (en) 2016-06-28 2020-08-04 Snap Inc. System to track engagement of media items
US10219110B2 (en) 2016-06-28 2019-02-26 Snap Inc. System to track engagement of media items
US10785597B2 (en) 2016-06-28 2020-09-22 Snap Inc. System to track engagement of media items
US10506371B2 (en) 2016-06-28 2019-12-10 Snap Inc. System to track engagement of media items
US10327100B1 (en) 2016-06-28 2019-06-18 Snap Inc. System to track engagement of media items
US11895068B2 (en) 2016-06-30 2024-02-06 Snap Inc. Automated content curation and communication
US10387514B1 (en) 2016-06-30 2019-08-20 Snap Inc. Automated content curation and communication
US11080351B1 (en) 2016-06-30 2021-08-03 Snap Inc. Automated content curation and communication
US11509615B2 (en) 2016-07-19 2022-11-22 Snap Inc. Generating customized electronic messaging graphics
US10348662B2 (en) 2016-07-19 2019-07-09 Snap Inc. Generating customized electronic messaging graphics
US11816853B2 (en) 2016-08-30 2023-11-14 Snap Inc. Systems and methods for simultaneous localization and mapping
US11876762B1 (en) 2016-10-24 2024-01-16 Snap Inc. Generating and displaying customized avatars in media overlays
US11843456B2 (en) 2016-10-24 2023-12-12 Snap Inc. Generating and displaying customized avatars in media overlays
US10623666B2 (en) 2016-11-07 2020-04-14 Snap Inc. Selective identification and order of image modifiers
US11233952B2 (en) 2016-11-07 2022-01-25 Snap Inc. Selective identification and order of image modifiers
US11750767B2 (en) 2016-11-07 2023-09-05 Snap Inc. Selective identification and order of image modifiers
US10754525B1 (en) 2016-12-09 2020-08-25 Snap Inc. Customized media overlays
US11397517B2 (en) 2016-12-09 2022-07-26 Snap Inc. Customized media overlays
US10203855B2 (en) 2016-12-09 2019-02-12 Snap Inc. Customized user-controlled media overlays
US11616745B2 (en) 2017-01-09 2023-03-28 Snap Inc. Contextual generation and selection of customized media content
US11870743B1 (en) 2017-01-23 2024-01-09 Snap Inc. Customized digital avatar accessories
US10915911B2 (en) 2017-02-03 2021-02-09 Snap Inc. System to determine a price-schedule to distribute media content
US11861795B1 (en) 2017-02-17 2024-01-02 Snap Inc. Augmented reality anamorphosis system
US11720640B2 (en) 2017-02-17 2023-08-08 Snap Inc. Searching social media content
US10319149B1 (en) 2017-02-17 2019-06-11 Snap Inc. Augmented reality anamorphosis system
US11250075B1 (en) 2017-02-17 2022-02-15 Snap Inc. Searching social media content
US11748579B2 (en) 2017-02-20 2023-09-05 Snap Inc. Augmented reality speech balloon system
US10614828B1 (en) 2017-02-20 2020-04-07 Snap Inc. Augmented reality speech balloon system
US11189299B1 (en) 2017-02-20 2021-11-30 Snap Inc. Augmented reality speech balloon system
US11037372B2 (en) 2017-03-06 2021-06-15 Snap Inc. Virtual vision system
US11670057B2 (en) 2017-03-06 2023-06-06 Snap Inc. Virtual vision system
US10523625B1 (en) 2017-03-09 2019-12-31 Snap Inc. Restricted group content collection
US11258749B2 (en) 2017-03-09 2022-02-22 Snap Inc. Restricted group content collection
US10887269B1 (en) 2017-03-09 2021-01-05 Snap Inc. Restricted group content collection
US11349796B2 (en) 2017-03-27 2022-05-31 Snap Inc. Generating a stitched data stream
US10582277B2 (en) 2017-03-27 2020-03-03 Snap Inc. Generating a stitched data stream
US10581782B2 (en) 2017-03-27 2020-03-03 Snap Inc. Generating a stitched data stream
US11297399B1 (en) 2017-03-27 2022-04-05 Snap Inc. Generating a stitched data stream
US11558678B2 (en) 2017-03-27 2023-01-17 Snap Inc. Generating a stitched data stream
US11170393B1 (en) 2017-04-11 2021-11-09 Snap Inc. System to calculate an engagement score of location based media content
US11195018B1 (en) 2017-04-20 2021-12-07 Snap Inc. Augmented reality typography personalization system
US10387730B1 (en) 2017-04-20 2019-08-20 Snap Inc. Augmented reality typography personalization system
US11393200B2 (en) * 2017-04-20 2022-07-19 Digimarc Corporation Hybrid feature point/watermark-based augmented reality
US11782574B2 (en) 2017-04-27 2023-10-10 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11409407B2 (en) 2017-04-27 2022-08-09 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11842411B2 (en) 2017-04-27 2023-12-12 Snap Inc. Location-based virtual avatars
US11893647B2 (en) 2017-04-27 2024-02-06 Snap Inc. Location-based virtual avatars
US11418906B2 (en) 2017-04-27 2022-08-16 Snap Inc. Selective location-based identity communication
US11474663B2 (en) 2017-04-27 2022-10-18 Snap Inc. Location-based search mechanism in a graphical user interface
US11556221B2 (en) 2017-04-27 2023-01-17 Snap Inc. Friend location sharing mechanism for social media platforms
US11451956B1 (en) 2017-04-27 2022-09-20 Snap Inc. Location privacy management on map-based social media platforms
US11392264B1 (en) 2017-04-27 2022-07-19 Snap Inc. Map-based graphical user interface for multi-type social media galleries
US10952013B1 (en) 2017-04-27 2021-03-16 Snap Inc. Selective location-based identity communication
US10963529B1 (en) 2017-04-27 2021-03-30 Snap Inc. Location-based search mechanism in a graphical user interface
US11385763B2 (en) 2017-04-27 2022-07-12 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11232040B1 (en) 2017-04-28 2022-01-25 Snap Inc. Precaching unlockable data elements
US11675831B2 (en) 2017-05-31 2023-06-13 Snap Inc. Geolocation based playlists
US11487988B2 (en) 2017-08-31 2022-11-01 Ford Global Technologies, Llc Augmenting real sensor recordings with simulated sensor data
US11455565B2 (en) 2017-08-31 2022-09-27 Ford Global Technologies, Llc Augmenting real sensor recordings with simulated sensor data
US11475254B1 (en) 2017-09-08 2022-10-18 Snap Inc. Multimodal entity identification
US11335067B2 (en) 2017-09-15 2022-05-17 Snap Inc. Augmented reality system
US11721080B2 (en) 2017-09-15 2023-08-08 Snap Inc. Augmented reality system
US10740974B1 (en) 2017-09-15 2020-08-11 Snap Inc. Augmented reality system
US11617056B2 (en) 2017-10-09 2023-03-28 Snap Inc. Context sensitive presentation of content
US11006242B1 (en) 2017-10-09 2021-05-11 Snap Inc. Context sensitive presentation of content
US10499191B1 (en) 2017-10-09 2019-12-03 Snap Inc. Context sensitive presentation of content
US11030787B2 (en) 2017-10-30 2021-06-08 Snap Inc. Mobile-based cartographic control of display content
US11670025B2 (en) 2017-10-30 2023-06-06 Snap Inc. Mobile-based cartographic control of display content
US11943185B2 (en) 2017-12-01 2024-03-26 Snap Inc. Dynamic media overlay with smart widget
US11558327B2 (en) 2017-12-01 2023-01-17 Snap Inc. Dynamic media overlay with smart widget
US11265273B1 (en) 2017-12-01 2022-03-01 Snap, Inc. Dynamic media overlay with smart widget
US11017173B1 (en) 2017-12-22 2021-05-25 Snap Inc. Named entity recognition visual context and caption data
US11687720B2 (en) 2017-12-22 2023-06-27 Snap Inc. Named entity recognition visual context and caption data
US20210185400A1 (en) * 2017-12-29 2021-06-17 Rovi Guides, Inc. Systems and methods for modifying fast-forward speeds based on the user's reaction time when detecting points of interest in content
US11743542B2 (en) * 2017-12-29 2023-08-29 Rovi Guides, Inc. Systems and methods for modifying fast-forward speeds based on the user's reaction time when detecting points of interest in content
US10678818B2 (en) 2018-01-03 2020-06-09 Snap Inc. Tag distribution visualization system
US11487794B2 (en) 2018-01-03 2022-11-01 Snap Inc. Tag distribution visualization system
US11507614B1 (en) 2018-02-13 2022-11-22 Snap Inc. Icon based tagging
US11841896B2 (en) 2018-02-13 2023-12-12 Snap Inc. Icon based tagging
US10885136B1 (en) 2018-02-28 2021-01-05 Snap Inc. Audience filtering system
US11523159B2 (en) 2018-02-28 2022-12-06 Snap Inc. Generating media content items based on location information
US10979752B1 (en) 2018-02-28 2021-04-13 Snap Inc. Generating media content items based on location information
US11044574B2 (en) 2018-03-06 2021-06-22 Snap Inc. Geo-fence selection system
US10524088B2 (en) 2018-03-06 2019-12-31 Snap Inc. Geo-fence selection system
US11722837B2 (en) 2018-03-06 2023-08-08 Snap Inc. Geo-fence selection system
US11570572B2 (en) 2018-03-06 2023-01-31 Snap Inc. Geo-fence selection system
US10327096B1 (en) 2018-03-06 2019-06-18 Snap Inc. Geo-fence selection system
US10933311B2 (en) 2018-03-14 2021-03-02 Snap Inc. Generating collectible items based on location information
US11491393B2 (en) 2018-03-14 2022-11-08 Snap Inc. Generating collectible items based on location information
US11644529B2 (en) * 2018-03-26 2023-05-09 Qualcomm Incorporated Using a side-communication channel for exchanging radar information to improve multi-radar coexistence
US11163941B1 (en) 2018-03-30 2021-11-02 Snap Inc. Annotating a collection of media content items
US10924886B2 (en) 2018-04-18 2021-02-16 Snap Inc. Visitation tracking system
US10219111B1 (en) 2018-04-18 2019-02-26 Snap Inc. Visitation tracking system
US10681491B1 (en) 2018-04-18 2020-06-09 Snap Inc. Visitation tracking system
US10448199B1 (en) 2018-04-18 2019-10-15 Snap Inc. Visitation tracking system
US11297463B2 (en) 2018-04-18 2022-04-05 Snap Inc. Visitation tracking system
US10779114B2 (en) 2018-04-18 2020-09-15 Snap Inc. Visitation tracking system
US11683657B2 (en) 2018-04-18 2023-06-20 Snap Inc. Visitation tracking system
US11860888B2 (en) 2018-05-22 2024-01-02 Snap Inc. Event detection system
US11280876B2 (en) 2018-06-18 2022-03-22 Qualcomm Incorporated Multi-radar coexistence using phase-coded frequency modulated continuous wave waveforms
US11385323B2 (en) 2018-06-25 2022-07-12 Qualcomm Incorporated Selection of frequency modulated continuous wave (FMWC) waveform parameters for multi-radar coexistence
US11367234B2 (en) 2018-07-24 2022-06-21 Snap Inc. Conditional modification of augmented reality object
US10789749B2 (en) 2018-07-24 2020-09-29 Snap Inc. Conditional modification of augmented reality object
US10943381B2 (en) 2018-07-24 2021-03-09 Snap Inc. Conditional modification of augmented reality object
US10679393B2 (en) 2018-07-24 2020-06-09 Snap Inc. Conditional modification of augmented reality object
US11670026B2 (en) 2018-07-24 2023-06-06 Snap Inc. Conditional modification of augmented reality object
US11585889B2 (en) 2018-07-25 2023-02-21 Qualcomm Incorporated Methods for radar coexistence
US11450050B2 (en) 2018-08-31 2022-09-20 Snap Inc. Augmented reality anthropomorphization system
US10997760B2 (en) 2018-08-31 2021-05-04 Snap Inc. Augmented reality anthropomorphization system
US11676319B2 (en) 2018-08-31 2023-06-13 Snap Inc. Augmented reality anthropomorphization system
US11704005B2 (en) 2018-09-28 2023-07-18 Snap Inc. Collaborative achievement interface
US11455082B2 (en) 2018-09-28 2022-09-27 Snap Inc. Collaborative achievement interface
US11799811B2 (en) 2018-10-31 2023-10-24 Snap Inc. Messaging and gaming applications communication platform
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US11812335B2 (en) 2018-11-30 2023-11-07 Snap Inc. Position service to determine relative position to map features
US11558709B2 (en) 2018-11-30 2023-01-17 Snap Inc. Position service to determine relative position to map features
US11698722B2 (en) 2018-11-30 2023-07-11 Snap Inc. Generating customized avatars based on location information
US11877211B2 (en) 2019-01-14 2024-01-16 Snap Inc. Destination sharing in location sharing system
US11751015B2 (en) 2019-01-16 2023-09-05 Snap Inc. Location-based context information sharing in a messaging system
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US11693887B2 (en) 2019-01-30 2023-07-04 Snap Inc. Adaptive spatial density based clustering
US11809624B2 (en) 2019-02-13 2023-11-07 Snap Inc. Sleep detection in a location sharing system
US11500525B2 (en) 2019-02-25 2022-11-15 Snap Inc. Custom media overlay system
US11954314B2 (en) 2019-02-25 2024-04-09 Snap Inc. Custom media overlay system
US11574431B2 (en) 2019-02-26 2023-02-07 Snap Inc. Avatar based on weather
US11301117B2 (en) 2019-03-08 2022-04-12 Snap Inc. Contextual information in chat
US11868414B1 (en) 2019-03-14 2024-01-09 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
US11852554B1 (en) 2019-03-21 2023-12-26 Snap Inc. Barometer calibration in a location sharing system
US11740760B2 (en) 2019-03-28 2023-08-29 Snap Inc. Generating personalized map interface with enhanced icons
US11249614B2 (en) 2019-03-28 2022-02-15 Snap Inc. Generating personalized map interface with enhanced icons
US11361493B2 (en) 2019-04-01 2022-06-14 Snap Inc. Semantic texture mapping system
US11206615B2 (en) 2019-05-30 2021-12-21 Snap Inc. Wearable device location systems
US11785549B2 (en) 2019-05-30 2023-10-10 Snap Inc. Wearable device location systems
US11606755B2 (en) 2019-05-30 2023-03-14 Snap Inc. Wearable device location systems architecture
US11601783B2 (en) 2019-06-07 2023-03-07 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11917495B2 (en) 2019-06-07 2024-02-27 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11714535B2 (en) 2019-07-11 2023-08-01 Snap Inc. Edge gesture interface with smart interactions
US11821742B2 (en) 2019-09-26 2023-11-21 Snap Inc. Travel based notifications
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
US11429618B2 (en) 2019-12-30 2022-08-30 Snap Inc. Surfacing augmented reality objects
US11128715B1 (en) 2019-12-30 2021-09-21 Snap Inc. Physical friend proximity in chat
US11729343B2 (en) 2019-12-30 2023-08-15 Snap Inc. Including video feed in message thread
US11893208B2 (en) 2019-12-31 2024-02-06 Snap Inc. Combined map icon with action indicator
US11943303B2 (en) 2019-12-31 2024-03-26 Snap Inc. Augmented reality objects registry
US11343323B2 (en) 2019-12-31 2022-05-24 Snap Inc. Augmented reality objects registry
US11888803B2 (en) 2020-02-12 2024-01-30 Snap Inc. Multiple gateway message exchange
US11228551B1 (en) 2020-02-12 2022-01-18 Snap Inc. Multiple gateway message exchange
US11516167B2 (en) 2020-03-05 2022-11-29 Snap Inc. Storing data based on device location
US11765117B2 (en) 2020-03-05 2023-09-19 Snap Inc. Storing data based on device location
US11619501B2 (en) 2020-03-11 2023-04-04 Snap Inc. Avatar based on trip
US11776256B2 (en) 2020-03-27 2023-10-03 Snap Inc. Shared augmented reality system
US11915400B2 (en) 2020-03-27 2024-02-27 Snap Inc. Location mapping for large scale augmented-reality
US11430091B2 (en) 2020-03-27 2022-08-30 Snap Inc. Location mapping for large scale augmented-reality
US11314776B2 (en) 2020-06-15 2022-04-26 Snap Inc. Location sharing using friend list versions
US11503432B2 (en) 2020-06-15 2022-11-15 Snap Inc. Scalable real-time location sharing framework
US11483267B2 (en) 2020-06-15 2022-10-25 Snap Inc. Location sharing using different rate-limited links
US11290851B2 (en) 2020-06-15 2022-03-29 Snap Inc. Location sharing using offline and online objects
US11676378B2 (en) 2020-06-29 2023-06-13 Snap Inc. Providing travel-based augmented reality content with a captured image
US11943192B2 (en) 2020-08-31 2024-03-26 Snap Inc. Co-location connection service
US11961116B2 (en) 2020-10-26 2024-04-16 Foursquare Labs, Inc. Determining exposures to content presented by physical objects
CN112987035A (en) * 2021-02-07 2021-06-18 北京中交创新投资发展有限公司 Beidou edge computing device and method for acquiring inspection facility data
US11601888B2 (en) 2021-03-29 2023-03-07 Snap Inc. Determining location using multi-source geolocation data
US11902902B2 (en) 2021-03-29 2024-02-13 Snap Inc. Scheduling requests for location data
US11606756B2 (en) 2021-03-29 2023-03-14 Snap Inc. Scheduling requests for location data
US11645324B2 (en) 2021-03-31 2023-05-09 Snap Inc. Location-based timeline media content system
CN113654548A (en) * 2021-07-16 2021-11-16 北京百度网讯科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
US11829834B2 (en) 2021-10-29 2023-11-28 Snap Inc. Extended QR code
US11962645B2 (en) 2022-06-02 2024-04-16 Snap Inc. Guided personal identity based actions
US11963105B2 (en) 2023-02-10 2024-04-16 Snap Inc. Wearable device location systems architecture
US11961196B2 (en) 2023-03-17 2024-04-16 Snap Inc. Virtual vision system

Also Published As

Publication number Publication date
EP2673766B1 (en) 2017-08-09
US8488011B2 (en) 2013-07-16
EP2673766A4 (en) 2015-03-11
EP2673766A1 (en) 2013-12-18
US20130307873A1 (en) 2013-11-21
CN103635954A (en) 2014-03-12
CN103635954B (en) 2016-05-25
US8953054B2 (en) 2015-02-10
WO2012109182A1 (en) 2012-08-16

Similar Documents

Publication Publication Date Title
US8953054B2 (en) System to augment a visual data stream based on a combination of geographical and visual information
US8392450B2 (en) System to augment a visual data stream with user-specific content
EP2673737B1 (en) A system for the tagging and augmentation of geographically-specific locations using a visual data stream
US8447329B2 (en) Method for spatially-accurate location of a device using audio-visual information
US9530251B2 (en) Intelligent method of determining trigger items in augmented reality environments
US9338589B2 (en) User-generated content in a virtual reality environment
US9691184B2 (en) Methods and systems for generating and joining shared experience
US9064326B1 (en) Local cache of augmented reality content in a mobile computing device
US9183546B2 (en) Methods and systems for a reminder servicer using visual recognition
US20120265328A1 (en) Methods and systems for generating frictionless social experience environment

Legal Events

Date Code Title Description
AS Assignment

Owner name: AUTONOMY CORPORATION LTD, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BLANCHFLOWER, SEAN MARK;LYNCH, MICHAEL RICHARD;REEL/FRAME:025794/0288

Effective date: 20110207

AS Assignment

Owner name: LONGSAND LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AUTONOMY CORPORATION LIMITED;REEL/FRAME:030009/0469

Effective date: 20110928

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: AURASMA LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LONGSAND LIMITED;REEL/FRAME:037022/0547

Effective date: 20151021

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AURASMA LIMITED;REEL/FRAME:047489/0451

Effective date: 20181011

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8