Dr. Amit Bhardwaj

Associate Professor,
Department of Electrical Engineering,

IIT Jodhpur, Rajasthan, India - 342037

Ongoing Projects: Principal Investigator

  1. Data-driven Haptic Modeling and Rendering of Normal Interactions on Inhomogeneous Viscoelastic Deformable Objects [Dec. 2020 - Dec. 2022] (Funding Agency: DST SERB, Amount: 33 Lacs)

    The project aims to identify computationally efficient mechanisms for haptic modeling and rendering of a complex phenomenon - inhomogeneous viscoelasticity - in the virtual environment. The experimental setup developed for this project may also be used to model and render other complex phenomena of viscoelastic/plastic deformable objects. The expected results of the project and the future directions will strengthen our efforts in creating realistic (or immersive), high-definition virtual environments, which are essential in many applications of virtual and augmented reality. For example, the findings of the work will contribute significantly to the development of medical training simulators in the virtual environment. The proposed research activities will also help in creating immersive digital museums, and may thus be used to re-create and preserve our heritage in the virtual environment. The proposed data-driven modeling and rendering approach may also be employed to interact with distant environments, with possible applications in telemedicine/telesurgery/telediagnostics.

  2. Telepresence and Teleaction System for Robot Assisted Dentistry [Dec. 2021 - Dec. 2024] (Funding Agency: IHFC IIT Delhi, Amount: 129 Lacs)

    In recent times, telepresence and teleaction (TPTA) systems have gained a lot of attention in the research community because of their many applications across various fields. TPTA systems allow a user to be present and active in a remote environment. Telemedicine (i.e., tele-diagnostics as well as tele-surgery) is one of their main applications. Dentists always work in very close proximity to patients, as most tasks involve the region inside the patient's mouth. With maintaining social distance and making medical services accessible to remote places as the main motivations of this project, we propose to place robots as an intermediary between the doctor and the patient to avoid direct human involvement. Different tasks can be executed with the help of the robot operated by a human from a distance (i.e., through tele-operation). To perform dental procedures such as scaling, drilling and root canal treatment, the doctor normally needs to be seated next to the patient. We propose to use a tele-operated robotic manipulator equipped with the necessary tools. However, these tasks require precise force feedback and position control of the manipulator. A force/torque sensor needs to be mounted at the end-effector to measure the contact interaction force, and a haptic device would enable the doctor to act based on the feedback. A mouth endoscope (used for getting 3D pictures of the mouth with teeth, gum details, etc.) of the remote patient is also necessary while performing the tele-operation task. In the proposed system, all three modalities (audio, haptic and visual) will be transmitted to the operator (doctor) in real time for efficient interaction between the operator and the teleoperator.

  3. Haptic Camera: Inter-relationship between Different Sensing Modalities for Textured Surfaces [March 2022 - March 2025] (Funding Agency: Seed Grant IIT Jodhpur, Amount: 21.43 Lacs)

    Material identification and retrieval is an emerging research area in the fields of robotics and human-machine/computer interaction. In order to give robots human-like abilities to characterize and identify materials, multi-modal information (audio, vision and haptics) should be processed by intelligent algorithms. For example, researchers have designed material classification systems based on novel features from all three sensing modalities. However, these studies have not investigated the relative importance of the sensing modalities in the classification task. In addition, it is not fully known how one sensing modality affects another. Unlike audio and vision, the technologies for storage, retrieval and compression in the field of haptics have not yet matured. Thus, if the inter-relationships between sensing modalities are known, we may use the advanced state-of-the-art technologies of audio and vision to extract the haptic information of a material. This information may then be used to re-create the haptic sense of a real material in a virtual environment. In other words, if the inter-relationship between different modalities is known, we may extract the sense of touch for many unknown objects whose image or audio information is available. This work focuses on finding this inter-relationship between the different sensing modalities.

Completed Projects

  1. Design and Analysis of Predictive Haptic Signals.

    In this work, we seek to identify good adaptive sampling strategies for haptic signals. Our approach relies on experiments wherein we record the responses of several users to haptic stimuli. We then learn different classifiers to predict the user response based on a variety of causal signal features. The classifiers with good prediction accuracy serve as possible candidates for adaptive sampling. We compare the resulting adaptive samplers based on their rate-distortion trade-off using synthetic as well as natural data. For classification, we use classifiers based on level crossings and Weber's law, as well as random forests built on a variety of causal signal features. The random forest typically yields the best prediction accuracy, and a study of variable importance suggests that the level-crossing and Weber classifier features are the most dominant. The classifiers based on level crossings and Weber's law have good accuracy (more than 90%) and are only marginally inferior to random forests. Given their simple parametric form, the level-crossing and Weber's-law-based classifiers are good candidates for adaptive sampling.
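
    As a rough illustration of the adaptive sampling idea (a minimal sketch, not the study's code; the Weber fraction and level spacing below are assumed values):

```python
# Minimal sketch: adaptive sampling of a 1-D haptic signal with two simple
# perceptual classifiers. Threshold values are illustrative assumptions.
import numpy as np

def weber_perceivable(x, x_ref, weber_fraction=0.1):
    """Weber's-law classifier: a change is perceivable if the relative
    deviation from the last transmitted sample exceeds a fixed fraction."""
    return abs(x - x_ref) > weber_fraction * abs(x_ref)

def level_crossing_perceivable(x, x_ref, level=0.05):
    """Level-crossing classifier: a change is perceivable if the absolute
    deviation from the last transmitted sample exceeds a fixed level."""
    return abs(x - x_ref) > level

def adaptive_sample(signal, perceivable):
    """Transmit only the samples the classifier marks as perceivable."""
    transmitted = [0]                 # always send the first sample
    x_ref = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if perceivable(x, x_ref):
            transmitted.append(i)
            x_ref = x                 # update the reference after transmission
    return transmitted

# Example: a slowly varying force signal sampled at 1 kHz.
t = np.arange(0, 1, 1e-3)
force = 2.0 + 0.5 * np.sin(2 * np.pi * 1.5 * t)
kept = adaptive_sample(force, weber_perceivable)
print(f"kept {len(kept)} of {len(force)} samples")
```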

  2. Deadzone Analysis of 2-D Kinesthetic Perception.

    In this study, we extend our first work to 2-D kinesthetic haptic signals and study the possible structures of the perceptual deadzones (which separate perceptually significant data from insignificant data) defined for the adaptive sampling schemes. We again study the Weber, level-crossing and general-purpose classifiers for this purpose. We find that the level-crossing classifier gives a significant improvement over the Weber classifier. The level-crossing classifier assumes a circular deadzone around the reference vector, and the radius of the deadzone is independent of the magnitude of the reference vector. In order to study the directional sensitivity of haptic perception, we modify the standard level-crossing classifier to have a general shape defined by a conic section and estimate the parameters of this conic section. The results demonstrate that kinesthetic perception is indeed circularly symmetric and independent of direction. Hence, a user does not have a directional preference while perceiving a change in 2-D haptic force.
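
    The deadzone structures discussed above can be written down compactly; the sketch below (illustrative only, with assumed radii) checks a 2-D force sample against a circular deadzone and against its conic-section (elliptical) generalization:

```python
# Minimal sketch with assumed radii: deadzone tests for a 2-D force sample
# relative to the last transmitted (reference) vector.
import numpy as np

def outside_circular_deadzone(f, f_ref, radius=0.1):
    """Circular deadzone: the radius does not depend on the magnitude of the
    reference vector, matching the level-crossing classifier above."""
    return np.linalg.norm(np.asarray(f) - np.asarray(f_ref)) > radius

def outside_elliptical_deadzone(f, f_ref, a=0.1, b=0.1, theta=0.0):
    """Conic-section (elliptical) generalization used to probe directional
    sensitivity; a == b recovers the circular case."""
    d = np.asarray(f) - np.asarray(f_ref)
    c, s = np.cos(theta), np.sin(theta)
    u, v = c * d[0] + s * d[1], -s * d[0] + c * d[1]   # rotate into ellipse axes
    return (u / a) ** 2 + (v / b) ** 2 > 1.0

print(outside_circular_deadzone([1.05, 0.02], [1.0, 0.0]))   # False: inside deadzone
print(outside_circular_deadzone([1.00, 0.15], [1.0, 0.0]))   # True: perceivable change
```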

  3. Does the Just Noticeable Difference Depend on the Rate of Change of the Kinesthetic Stimulus?

    Previous studies on perceptual adaptive sampling have not investigated how the just noticeable difference (JND) for a kinesthetic force stimulus is affected by the rate of temporal change of the stimulus. A fixed JND does not fully exploit the perceptual limitations of a human being. For example, if the signal changes very slowly, it is difficult for a user to react to the change, and when the change is too quick, the user may not respond because of the human response time (the minimum time required to react to a change). Thus, a fixed JND will generate unnecessary packets for such signals. Hence, in this work, we examine the relationship between the JND and the rate of change of the force stimulus. For this purpose, we design an experiment where a user is exposed to a linearly increasing/decreasing haptic force stimulus and is asked to react to the change. Our results show that the JND decreases for a faster change in the force stimulus. We also show that there is an asymmetry in perception between increasing and decreasing force stimuli. The findings of this work are therefore relevant to the better design of haptic data compression algorithms.
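
    For illustration, the sketch below shows how a per-rate JND could be tabulated from such reaction data; the trial layout and all numbers are hypothetical, not the experiment's recordings:

```python
# Minimal sketch (hypothetical data): estimating a JND per ramp rate from
# logged trials of a ramp-reaction experiment. Each trial stores the ramp
# rate, the force at ramp onset and the force at which the user reacted; the
# JND for a trial is the force change accumulated before the reaction.
from collections import defaultdict

# (rate in N/s, force at onset in N, force at reaction in N) -- illustrative numbers
trials = [
    (0.5, 1.0, 1.22),
    (0.5, 1.0, 1.25),
    (2.0, 1.0, 1.15),
    (2.0, 1.0, 1.12),
]

jnd_by_rate = defaultdict(list)
for rate, f0, f_react in trials:
    jnd_by_rate[rate].append(abs(f_react - f0))

for rate, jnds in sorted(jnd_by_rate.items()):
    print(f"rate {rate:4.1f} N/s: mean JND = {sum(jnds) / len(jnds):.3f} N")
```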

  4. Estimation of Resolvability of User Response in Kinesthetic Perception of Jump Discontinuities.

    In this work, we estimate the temporal resolution of a user - the minimum time spacing required to perceive two consecutive jumps in a kinesthetic force stimulus. In a teleoperation, perceptually significant force samples are transmitted from a robot to a human operator. If the time spacing between two consecutive, perceptually sampled kinesthetic force stimuli is less than the minimum time spacing (temporal resolution) required to perceive a jump discontinuity, then the second force stimulus will not be perceived even if it is well above the just noticeable difference. Hence, there is no need to transmit the second force sample to the operator; instead, the teleoperator needs to slow down the operation. Thus, for transmission in a teleoperation, the temporal resolution also needs to be considered when applying perceptually adaptive sampling. In this work, we propose a statistical method to estimate the temporal resolution and show that it lies between 20-30 ms for most users.
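
    A minimal sketch of how such a temporal-resolution constraint could be enforced on top of a perceptual sampler (the 25 ms value and the event list are illustrative):

```python
# Minimal sketch: drop perceptually significant force samples that arrive
# closer together than the estimated temporal resolution (roughly 20-30 ms
# for most users in this study). Timestamps are in seconds.
def enforce_temporal_resolution(events, resolution_s=0.025):
    """events: list of (timestamp, force) pairs already selected by a
    perceptual (e.g. JND-based) sampler. Keep an event only if it occurs at
    least `resolution_s` after the previously kept one."""
    kept = []
    last_t = None
    for t, f in events:
        if last_t is None or t - last_t >= resolution_s:
            kept.append((t, f))
            last_t = t
    return kept

perceptual_events = [(0.000, 1.0), (0.012, 1.4), (0.040, 1.9), (0.055, 2.5)]
print(enforce_temporal_resolution(perceptual_events))
# -> the 0.012 s and 0.055 s events are dropped as unresolvable
```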

  5. Sequential Effect for Force Perception.

    In the psychophysics literature, it is reported that when a user is subjected to many trials in succession during a psychophysical experiment, the perception in the current trial is observed to be overly similar to the previous trial (assimilation effect) and dissimilar to distantly past trials (contrast effect). Overall, this behavior is called the sequential effect and is a well established phenomenon in psychophysics. In the literature, the sequential effect has been demonstrated for loudness of sound and has been further assumed for other perceptual modalities such as haptics and vision. However, to the best of our knowledge, there is no experimental study either establishing its existence for force perception or quantifying the effect. This motivates us to study the sequential effect for force perception. In this work, we investigate whether or not the sequential effect exists and how to quantify it. In order to study the presence of the sequential effect, we design an experimental setup where a user is subjected to a series of random force stimuli, and we record the responses of several users. Thereafter, a logistic regression model is employed to observe how much the recorded responses are affected by the past stimuli. Based on the results of the logistic regression model, we demonstrate the presence of the sequential effect for kinesthetic stimuli. We also explain how to quantify the duration over which the sequential effect persists.
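
    The sketch below illustrates the kind of lagged-regressor logistic model described above, fitted on synthetic data rather than the recorded responses:

```python
# Minimal sketch (synthetic data, not the study's): fit a logistic regression
# on the current stimulus and a few past stimuli, then inspect the lag
# coefficients to see how far back a sequential effect persists.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_lags = 2000, 3
stimulus = rng.uniform(0.5, 3.0, n_trials)          # random force magnitudes

# Synthetic responses: mostly driven by the current stimulus, with a small
# assimilation-like contribution from the previous one.
logit = 2.0 * (stimulus - 1.75)
logit[1:] += 0.6 * (stimulus[:-1] - 1.75)
response = (rng.uniform(size=n_trials) < 1 / (1 + np.exp(-logit))).astype(int)

# Design matrix [current, lag-1, ..., lag-k]; row t uses stimuli t, t-1, ..., t-k.
X = np.column_stack([stimulus[n_lags - k:n_trials - k] for k in range(n_lags + 1)])
y = response[n_lags:]
model = LogisticRegression().fit(X, y)
print("coefficients (current, lag-1, lag-2, lag-3):", model.coef_.round(2))
```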

  6. A Candidate Hardware and Software Reference Setup for Kinesthetic Codec Standardization.

    Recently, the IEEE P1918.1 standardization activity was initiated to define a framework for the Tactile Internet. Within this activity, IEEE P1918.1.1 is the task group for the standardization of haptic codecs for the Tactile Internet. The primary goal of the task group is to define/develop codecs for both closed-loop (kinesthetic information exchange) and open-loop (tactile information exchange) communications. In this work, we propose a reference hardware and software setup for the evaluation of kinesthetic codecs. The setup defines a typical teleoperation scenario in a virtual environment for the realization of closed-loop kinesthetic interactions. We provide detailed guidelines for the installation and testing of the setup. The work also provides sample data traces for both static and dynamic kinesthetic interactions. These data traces may be used for preliminary testing of kinesthetic codecs.
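
    As one example of how such data traces might be used, the sketch below runs a simple Weber-deadband codec over a force trace and reports the packet count; it is not part of the reference setup or the standard, and a synthetic trace stands in for the recorded data:

```python
# Minimal sketch (not part of the standard): run a Weber-deadband codec over
# a kinesthetic force trace and report how many packets survive, as a first
# sanity check of a candidate codec. A synthetic trace is used here; in
# practice the force columns of a recorded reference trace would be loaded.
import numpy as np

def deadband_packet_indices(force, k=0.1):
    """Transmit a 3-D force sample only when it leaves the deadband of
    relative size k around the last transmitted sample."""
    kept = [0]
    ref = force[0]
    for i in range(1, len(force)):
        if np.linalg.norm(force[i] - ref) > k * np.linalg.norm(ref):
            kept.append(i)
            ref = force[i]
    return kept

# Stand-in for a 1 kHz recorded force trace with columns (fx, fy, fz).
t = np.arange(0, 5, 1e-3)
force = np.column_stack([1 + np.sin(t), 0.3 * np.cos(2 * t), 0.1 * t])
kept = deadband_packet_indices(force)
print(f"packets transmitted: {len(kept)} of {len(force)} samples")
```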

  7. On the Minimum Perceptual Temporal Video Sampling Rate and its Application to Adaptive Frame Skipping.

    Media technology, in particular video recording and playback, keeps improving to provide users with high-quality real and virtual visual content. In recent years, increasing the temporal sampling rate of videos and the refresh rate of displays has become one focus of technical innovation. This raises the question of how high the sampling and refresh rates need to be. To answer this question, we determine the minimum temporal sampling rate at which a video should be presented to make temporal sampling imperceptible to viewers. Through a psychophysical study, we find that this minimum sampling rate depends on both the speed of the objects in the image plane and the exposure time of the recording camera. We propose a model to compute the required minimum sampling rate based on these two parameters. In addition, state-of-the-art video codecs employ motion vectors from which the local object movement speed can be inferred. Therefore, we present a procedure to compute the minimum sampling rate given an encoded video and the camera exposure time. Since the object motion speed in a video may vary, the corresponding minimum frame rate also varies. The results of this work are therefore particularly applicable when used together with adaptive-frame-rate computer-generated graphics or novel video communication solutions that drop insignificant frames. In our experiments, we show that videos played back at the minimum adaptive frame rate achieve an average bit rate reduction of 26% compared to constant frame rate playback, while no perceptual difference can be observed.
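
    The sketch below illustrates only the frame-skipping step; the per-frame minimum rates are taken as given, since the actual model relating object speed and exposure time to the minimum rate is not reproduced here:

```python
# Minimal sketch: adaptive frame skipping driven by a per-frame minimum
# sampling rate. How that rate follows from the motion vectors and the camera
# exposure time is left to the model described above; here it is an input.
def frames_to_keep(min_rate_hz, capture_rate_hz=120.0):
    """Keep frame i whenever dropping it would let the gap to the next frame
    exceed the maximum allowed interval 1 / min_rate_hz[i]."""
    dt = 1.0 / capture_rate_hz
    kept = [0]
    t_last = 0.0
    for i in range(1, len(min_rate_hz)):
        t = i * dt
        # If frame i were skipped, the gap would grow to (t + dt) - t_last.
        if (t + dt) - t_last > 1.0 / min_rate_hz[i]:
            kept.append(i)
            t_last = t
    return kept

# Fast motion early on demands a high rate; slow motion later allows skipping.
min_rates = [90.0] * 60 + [30.0] * 60
kept = frames_to_keep(min_rates)
print(f"kept {len(kept)} of {len(min_rates)} frames")
```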

  8. Learning-Based Modular Task-Oriented Grasp Stability Assessment.

    Assessing grasp stability is essential to prevent the failure of robotic manipulation tasks due to sensory data and object uncertainties. Learning-based approaches are widely deployed to infer the success of a grasp. Typically, the underlying model used to estimate grasp stability is trained for a specific task, such as lifting, hand-over, or pouring. Since every task has individual stability demands, it is important to adapt the trained model to new manipulation actions. If the same trained model is directly applied to a new task, unnecessary grasp adaptations might be triggered, or in the worst case, the manipulation might fail. To address this issue, we divide the manipulation task used for training into seven sub-tasks, defined as modular tasks. We deploy a learning-based approach and assess the stability for each modular task separately. We further propose analytical features to reduce the dimensionality and the redundancy of the tactile sensor readings. A main task can thereby be represented as a sequence of relevant modular tasks, and the stability prediction of the main task is computed based on the inferred success labels of the modular tasks. Our experimental evaluation shows that the proposed feature set lowers the prediction error by up to 5.69% compared to other feature sets used in state-of-the-art methods. Robotic experiments demonstrate that our modular task-oriented stability assessment avoids unnecessary grasp force adaptations and regrasps for various manipulation tasks.
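
    A minimal sketch of the composition step: the module names and the threshold "classifiers" below are placeholders, not the trained models or the seven modular tasks used in the work:

```python
# Minimal sketch: compose a main task from a sequence of modular tasks and
# predict its stability from the per-module success labels. Module names and
# per-module "classifiers" are hypothetical placeholders.
def predict_main_task_stability(module_classifiers, main_task, features):
    """main_task: ordered list of modular-task names relevant to the action.
    features: tactile feature vector/dict for the current grasp.
    The grasp is predicted stable for the main task only if every relevant
    modular-task classifier predicts success."""
    labels = {m: module_classifiers[m](features) for m in main_task}
    return all(labels.values()), labels

# Trivially thresholded stand-ins for trained per-module classifiers.
classifiers = {
    "lift":      lambda f: f["normal_force"] > 2.0,
    "rotate":    lambda f: f["torque_margin"] > 0.1,
    "hand_over": lambda f: f["contact_area"] > 0.5,
}
stable, per_module = predict_main_task_stability(
    classifiers, ["lift", "rotate"], {"normal_force": 2.4, "torque_margin": 0.05}
)
print(stable, per_module)   # False: the rotate module predicts failure
```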

  9. Automatic Transfer of Musical Mood into Virtual Environments.

    This work presents a method that automatically transforms a virtual environment (VE) according to the mood of input music. We use machine learning to extract a mood from the music. We then select images exhibiting the mood and transfer their styles to the textures of objects in the VE, either photorealistically or artistically. Our user study results indicate that our method is effective in transferring valence-related aspects, but not arousal-related ones. Our method can still provide novel experiences in virtual reality and speed up the production of VEs by automating the procedure.

  10. Data-Driven Haptic Modeling and Rendering of Viscoelastic Deformable Objects Using Random Forest Regression.

    In this work, we propose a new data-driven approach for haptic modeling and rendering of homogeneous viscoelastic objects. The approach is based on a well known machine learning technique, the random forest; here we employ random forests for regression, not for classification. We acquire discrete-time interaction data for many automated cyclic compressions of a deformable object. A random forest is trained to estimate a nonparametric relationship between the position and the response forces. We consider only four extremum interactions for training the forest. Our hypothesis is that a random forest model trained on extremum interactions can estimate the response forces for unseen interactions. Our results show that a model trained with just 10% of the training data is capable of modeling unseen interactions with good accuracy, thus validating our hypothesis. Subsequently, we employ the trained model for haptic rendering. When implemented on a CPU, the model simulates the force feedback at an update rate faster than 1 kHz. Thus, the proposed approach provides very promising results for both modeling and rendering. In addition, the approach makes the data acquisition task simple, as it needs only four extremum interactions for generalization of the results.
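
    A rough sketch of the regression-and-rendering idea using scikit-learn (synthetic data and placeholder hyper-parameters, not the authors' setup):

```python
# Minimal sketch: fit a random forest regressor on position/deformation
# inputs and response forces, then query it inside a haptic rendering loop.
# The data arrays and hyper-parameters below are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assumed training data: penetration depth (current + 2 past samples, in mm)
# as input, measured response force (N) as output.
rng = np.random.default_rng(1)
depth = rng.uniform(0.0, 10.0, (5000, 3))
force = 0.8 * depth[:, 0] + 0.1 * depth[:, 1] ** 1.2 + rng.normal(0, 0.05, 5000)

model = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1)
model.fit(depth, force)

# Rendering loop sketch: one force query per 1 ms haptic update.
query = np.array([[4.2, 4.0, 3.9]])
print(f"rendered force: {model.predict(query)[0]:.2f} N")
```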