Dr. Amit Bhardwaj

Assistant Professor,
Department of Electrical Engineering,

IIT Jodhpur, Rajasthan, India - 342037

First of all, thanks for visiting my home page. I work in the field of haptics - the study of the sense of touch. Unlike our other senses such as vision, hearing, smell and taste, the sense of touch remained relatively unexplored for a long time. Over the last three decades, it has emerged as an active area of research for the scientific community.

In the haptics literature, three types of haptics are broadly distinguished - human haptics, machine haptics and computer haptics. When a user touches an object, the interaction information is conveyed to the brain through the sensory system. The brain computes the necessary forces, which are fed back to activate the muscles for hand or arm movements. Human haptics deals with the study of this human sensorimotor loop and issues related to human perception of the sense of touch. Machine haptics deals with the design and construction of electro-mechanical devices which can effectively replace or augment human touch. To render touch in a virtual environment, one needs to design and develop algorithms and software which can compute interaction forces and simulate the physical properties of virtual objects. Computer haptics deals with all of these aspects. Hence, research in haptics is multidisciplinary in nature, involving control, robotics, computer science/engineering, psychophysics, neurophysiology and human motor control. There are many interesting and challenging applications of haptics, especially in teleoperation - for example, in medical surgery, in performing hazardous tasks, and in space exploration using remotely controlled robots.

I am primarily interested in computer haptics and human haptics. My work is also closely associated with the fields of augmented/virtual reality, human-computer interaction and robotics. I am looking for hardworking and motivated master's and PhD students. If you want to contribute to an emerging research field like haptics, feel free to contact me at any time.

I have undertaken the following research projects.
  1. Design and Analysis of Predictive Haptic Signals.

    In this work, we seek to identify good adaptive sampling strategies for haptic signals. Our approach relies on experiments wherein we record the response of several users to haptic stimuli. We then learn different classifiers to predict the user response based on a variety of causal signal features. The classifiers that have good prediction accuracy serve as possible candidates to be used in adaptive sampling. We compare the resultant adaptive samplers based on their rate-distortion tradeoff using synthetic as well as natural data. For classification, we use classifiers based on level crossings and Weber's law, and also random forests using a variety of causal signal features. The random forest typically yields the best prediction accuracy, and a study of variable importance suggests that the level-crossing and Weber's-law features are the most dominant. The classifiers based on level crossings and Weber's law have good accuracy (more than 90%) and are only marginally inferior to random forests. Given their simple parametric form, the level-crossing and Weber's-law based classifiers are good candidates for adaptive sampling.
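
    As a rough illustration of the kind of adaptive sampler such classifiers induce, the sketch below implements a Weber's-law deadband sampler and a level-crossing deadband sampler. The toy signal and the threshold values (10% and 0.2 N) are illustrative placeholders, not the parameters used in the study.

```python
import numpy as np

def weber_sampler(force, threshold=0.1):
    """Keep a sample only when it differs from the last kept sample by more
    than a fixed fraction of that sample (Weber's-law style deadband)."""
    kept, ref = [0], force[0]
    for i in range(1, len(force)):
        if abs(force[i] - ref) > threshold * abs(ref):
            kept.append(i)
            ref = force[i]
    return kept

def level_crossing_sampler(force, delta=0.2):
    """Keep a sample only when it differs from the last kept sample by more
    than a fixed absolute step (level-crossing style deadband)."""
    kept, ref = [0], force[0]
    for i in range(1, len(force)):
        if abs(force[i] - ref) > delta:
            kept.append(i)
            ref = force[i]
    return kept

# Toy 1 kHz force trace used only to exercise the two samplers.
t = np.linspace(0, 1, 1000)
force = 2.0 + np.sin(2 * np.pi * 3 * t)
print(len(weber_sampler(force)), "and", len(level_crossing_sampler(force)),
      "samples kept out of", len(force))
```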

  2. Deadzone Analysis of 2-D Kinesthetic Perception.

    In this study, we extend our first work to 2-D kinesthetic haptic signals and study the possible structures of perceptual deadzones (which separate perceptually significant from insignificant data) defined for the adaptive sampling schemes. We again study the Weber, level-crossing and general-purpose classifiers for this purpose. We find that the level-crossing classifier gives a significant improvement over the Weber classifier. The level-crossing classifier assumes a circular deadzone around the reference vector, and the radius of the deadzone is independent of the magnitude of the reference vector. In order to study the directional sensitivity of haptic perception, we modify the standard level-crossing classifier to have a general shape defined by a conic section and estimate the parameters of this conic section. The results demonstrate that kinesthetic perception is indeed circularly symmetric and independent of direction. Hence, a user has no directional preference while perceiving a change in 2-D haptic force.
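
    A minimal sketch of the two deadzone tests discussed above, assuming a Euclidean test for the circular deadzone and a quadratic form for the conic generalization; the radius and the matrix Q below are illustrative values, not estimates from the experiments.

```python
import numpy as np

def outside_circular_deadzone(f, f_ref, radius=0.2):
    """Level-crossing style 2-D test: the change is perceptually significant
    if the force deviates from the reference vector by more than a fixed
    radius, independent of direction."""
    return np.linalg.norm(np.asarray(f) - np.asarray(f_ref)) > radius

def outside_conic_deadzone(f, f_ref, Q):
    """Generalized test with a conic deadzone: (f - f_ref)^T Q (f - f_ref) > 1.
    Estimating Q from user responses probes directional sensitivity; Q
    proportional to the identity recovers the circular case."""
    d = np.asarray(f) - np.asarray(f_ref)
    return d @ Q @ d > 1.0

f_ref = np.array([1.0, 0.5])
print(outside_circular_deadzone([1.1, 0.6], f_ref))                   # small change -> False
print(outside_conic_deadzone([1.5, 0.5], f_ref, np.eye(2) / 0.2**2))  # larger change -> True
```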

  3. Does Just Noticeable Difference Depend on the Rate of Change of Kinesthetic Stimulus?

    Previous studies on perceptual adaptive sampling have not investigated how the just noticeable difference (JND) for a kinesthetic force stimulus is affected by the rate of temporal change of the stimulus. A fixed JND does not fully exploit the perceptual limitations of a human user. For example, if the signal changes very slowly, it is difficult for a user to react to the change, and when the change is too quick, the user may not respond because of the human response time (the minimum time required to react to a change). Thus, a fixed JND contributes inessential packets for such signals. Hence, in this work, we examine the relationship between the JND and the rate of change of the force stimulus. For this purpose, we design an experiment in which a user is exposed to a linearly increasing/decreasing haptic force stimulus and is asked to react to the change. Our results show that the JND decreases for a faster change in the force stimulus. We also show an asymmetry in perception between increasing and decreasing force stimuli. The findings of this work are therefore relevant to the better design of haptic data compression algorithms.
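
    One way a trial in such an experiment could be turned into a per-trial JND estimate is sketched below; the slopes and reaction times are made up for illustration, and the mapping from reaction time to JND is an assumption on my part, not the exact analysis used in the study.

```python
def jnd_estimate(slope, reaction_time):
    """For a ramp stimulus f(t) = f0 + slope * t, the change accumulated until
    the user reacts, |slope| * reaction_time, serves as a per-trial JND estimate."""
    return abs(slope) * reaction_time

# Made-up trials (slope in N/s, reaction time in s), purely for illustration:
for slope, t_react in [(0.5, 0.60), (1.0, 0.35), (2.0, 0.22), (-1.0, 0.40)]:
    print(f"slope {slope:+.1f} N/s -> JND estimate {jnd_estimate(slope, t_react):.2f} N")
```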

  4. Estimation of Resolvability of User Response in Kinesthetic Perception of Jump Discontinuities.

    In this work, we estimate the temporal resolution of a user - the minimum time spacing required to perceive two consecutive jumps in a kinesthetic force stimulus. In teleoperation, perceptually significant force samples are transmitted from a robot to a human operator. If the time spacing between two consecutive, perceptually sampled kinesthetic force stimuli is less than the minimum time spacing (temporal resolution) required to perceive the jump discontinuity, then the second force stimulus will not be perceived even if it is well above the just noticeable difference. Hence, there is no need to transmit the second force sample to the operator; in fact, the teleoperator needs to slow down the operation. Thus, for transmission in teleoperation, the temporal resolution also needs to be considered while performing perceptually adaptive sampling. We propose a statistical method to estimate the temporal resolution and show that it lies between 20 and 30 ms for most users.
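
    A sketch of how an estimated temporal resolution could be combined with a perceptual deadband in a sampler; the 10% deadband and the 25 ms spacing (within the 20-30 ms range reported above) are illustrative choices, not the exact scheme of the work.

```python
import numpy as np

def perceptual_sampler(force, times, jnd_fraction=0.1, temporal_resolution=0.025):
    """Transmit a sample only if it is perceptually significant (deadband
    violated) AND at least `temporal_resolution` seconds have elapsed since
    the last transmitted sample."""
    kept = [0]
    ref_force, ref_time = force[0], times[0]
    for i in range(1, len(force)):
        significant = abs(force[i] - ref_force) > jnd_fraction * abs(ref_force)
        resolvable = (times[i] - ref_time) >= temporal_resolution
        if significant and resolvable:
            kept.append(i)
            ref_force, ref_time = force[i], times[i]
    return kept

# Toy 1 kHz trace to exercise the sampler.
t = np.linspace(0, 1, 1000)
f = 2.0 + np.sin(2 * np.pi * 5 * t)
print(len(perceptual_sampler(f, t)), "samples transmitted out of", len(f))
```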

  5. Sequential Effect for Force Perception.

    In the literature on psychophysics, it is reported that during a psychophysical experiment when a user is subjected to many trials in succession, the perception of the current trial is observed to be overly similar to the previous trial (assim ilation effect), and is observed to be dissimilar to distantly past trials (contrast effect). Overall, this behavior is called the sequential effect and is a very well established phenomenon in psychophysics. In the literature, the sequential ef- fect has been demonstrated on loudness of sound, and has been further assumed for other perceptual modalities like haptics and vision. However, to the best of our knowledge, we have not found any experimental study either claiming its existence for force perception or for quantifying the effect. This motivates us to study the sequential effect for force perception. In this work, we take up this study and find out whether or not the sequential effect exists, and how to quantify the effect. In order to study the presence of sequential effect, we design an experimental setup where a user is subjected to a series of random force stimuli. We record the responses for several users. Thereafter, a logis tic regression model is employed to observe how much the recorded responses are affected by the past stimuli. Based on the results of the logistic regres sion model, we demonstrate the presence of sequential effect for the kinesthetic stimuli. We also explain how to quantify the duration over which the sequential effect persists.
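
    A minimal sketch of the kind of lagged-feature logistic regression described above, run on synthetic data; the stimulus model, the response rule and the number of lags are placeholders, not the actual experimental data or analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: stimuli[t] is the force magnitude at trial t,
# responses[t] is the user's binary response at trial t.
rng = np.random.default_rng(0)
stimuli = rng.uniform(0.5, 3.0, size=500)
responses = (stimuli + 0.2 * rng.standard_normal(500) > 1.75).astype(int)

# Build lagged features: the current stimulus plus the K previous stimuli.
K = 5
X = np.column_stack([stimuli[K - k : len(stimuli) - k] for k in range(K + 1)])
y = responses[K:]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Non-negligible weights on the lagged columns would indicate that past
# stimuli still influence the current response (a sequential effect); the
# lag at which the weights die out suggests how long the effect persists.
print(dict(zip([f"lag_{k}" for k in range(K + 1)], model.coef_[0].round(3))))
```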

  6. A Candidate Hardware and Software Reference Setup for Kinesthetic Codec Standardization.

    Recently, the IEEE P1918.1 standardization activity has been initiated to define a framework for the Tactile Internet. Within this activity, IEEE P1918.1.1 is a task group for the standardization of Haptic Codecs for the Tactile Internet. The primary goal of the task group is to define/develop codecs for both closed-loop (kinesthetic information exchange) and open-loop (tactile information exchange) communications. In this work, we propose a reference hardware and software setup for the evaluation of kinesthetic codecs. The setup defines a typical teleoperation scenario in a virtual environment for the realization of closed-loop kinesthetic interactions. We provide detailed guidelines for the installation and testing of the setup. The work also provides sample data traces for both static and dynamic kinesthetic interactions. These data traces may be used for preliminary testing of kinesthetic codecs.

  7. On the Minimum Perceptual Temporal Video Sampling Rate and its Application to Adaptive Frame Skipping.

    Media technology, in particular video recording and playback, keeps improving to provide users with high-quality real and virtual visual content. In recent years, increasing the temporal sampling rate of videos and the refresh rate of displays has become a focus of technical innovation. This raises the question of how high the sampling and refresh rates should be. To answer this question, we determine the minimum temporal sampling rate at which a video should be presented to make temporal sampling imperceptible to viewers. Through a psychophysical study, we find that this minimum sampling rate depends on both the speed of the objects in the image plane and the exposure time of the recording camera. We propose a model to compute the required minimum sampling rate based on these two parameters. In addition, state-of-the-art video codecs employ motion vectors from which the local object movement speed can be inferred. Therefore, we present a procedure to compute the minimum sampling rate given an encoded video and the camera exposure time. Since the object motion speed in a video may vary, the corresponding minimum frame rate also varies. The results of this work are therefore particularly applicable when used together with adaptive frame rate computer-generated graphics or novel video communication solutions that drop insignificant frames. In our experiments, we show that videos played back at the minimum adaptive frame rate achieve an average bit rate reduction of 26% compared to constant frame rate playback, while no perceptual difference can be observed.
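
    As an illustration of how such a model could drive adaptive frame skipping, here is a sketch; min_frame_rate is a placeholder with arbitrary coefficients (the fitted model from the study is not reproduced here), and in practice the per-frame speeds would come from decoded motion vectors.

```python
def min_frame_rate(object_speed_px_per_s, exposure_time_s):
    """Placeholder for the fitted perceptual model: the required rate grows
    with object speed in the image plane and drops for longer exposure times
    (more motion blur). The coefficients below are arbitrary, not the study's."""
    blur_factor = 1.0 - min(0.9, 60.0 * exposure_time_s)
    return max(24.0, 0.1 * object_speed_px_per_s * blur_factor)

def frames_to_keep(per_frame_speeds, exposure_time_s, source_fps):
    """Greedy adaptive frame skipping: keep a frame only when the time since
    the last kept frame exceeds the period allowed by the local motion."""
    kept, last_t = [0], 0.0
    for i in range(1, len(per_frame_speeds)):
        t = i / source_fps
        if t - last_t >= 1.0 / min_frame_rate(per_frame_speeds[i], exposure_time_s):
            kept.append(i)
            last_t = t
    return kept

# Example: 120 fps source, 1/240 s exposure, per-frame peak speeds in px/s.
speeds = [300.0] * 60 + [1500.0] * 60
kept = frames_to_keep(speeds, exposure_time_s=1 / 240, source_fps=120)
print(f"kept {len(kept)} of {len(speeds)} frames")
```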

  8. Learning-Based Modular Task-Oriented Grasp Stability Assessment.

    Assessing grasp stability is essential to prevent the failure of robotic manipulation tasks due to sensory data and object uncertainties. Learning-based approaches are widely deployed to infer the success of a grasp. Typically, the underlying model used to estimate grasp stability is trained for a specific task, such as lifting, hand-over, or pouring. Since every task has individual stability demands, it is important to adapt the trained model to new manipulation actions. If the same trained model is directly applied to a new task, unnecessary grasp adaptations might be triggered, or in the worst case, the manipulation might fail. To address this issue, we divide the manipulation task used for training into seven sub-tasks, defined as modular tasks. We deploy a learning-based approach and assess the stability for each modular task separately. We further propose analytical features to reduce the dimensionality and the redundancy of the tactile sensor readings. A main task can thereby be represented as a sequence of relevant modular tasks. The stability prediction of the main task is computed based on the inferred success labels of the modular tasks. Our experimental evaluation shows that the proposed feature set lowers the prediction error by up to 5.69% compared to other sets used in state-of-the-art methods. Robotic experiments demonstrate that our modular task-oriented stability assessment avoids unnecessary grasp force adaptations and regrasps for various manipulation tasks.
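
    A minimal sketch of the modular idea, assuming one stability classifier per modular task and a main task predicted stable only if every module in its sequence is predicted stable; the module names and the stub classifier are placeholders, not the actual tasks or models from the work.

```python
from dataclasses import dataclass
from typing import Dict, Sequence

# Placeholder names for the seven modular tasks; the actual labels used in
# the work may differ.
MODULAR_TASKS = ["approach", "grasp", "lift", "transport", "shake", "place", "release"]

@dataclass
class ModularStabilityAssessor:
    """One trained classifier per modular task; a main task is a sequence of
    modular tasks and is predicted stable only if all modules are stable."""
    module_models: Dict[str, object]  # task name -> classifier with .predict()

    def predict_main_task(self, sequence: Sequence[str],
                          features_per_module: Dict[str, list]) -> bool:
        return all(
            self.module_models[name].predict([features_per_module[name]])[0] == 1
            for name in sequence
        )

class AlwaysStable:
    """Stub classifier used only to make this sketch executable."""
    def predict(self, X):
        return [1 for _ in X]

assessor = ModularStabilityAssessor({name: AlwaysStable() for name in MODULAR_TASKS})
handover = ["approach", "grasp", "lift", "transport", "release"]
print(assessor.predict_main_task(handover, {name: [0.0] for name in MODULAR_TASKS}))
```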

  9. Automatic Transfer of Musical Mood into Virtual Environments.

    This work presents a method that automatically transforms a virtual environment (VE) according to the mood of input music. We use machine learning to extract a mood from the music. We then select images exhibiting that mood and transfer their styles to the textures of objects in the VE, either photorealistically or artistically. Our user study results indicate that our method is effective in transferring valence-related aspects, but not arousal-related ones. Our method can nevertheless provide novel experiences in virtual reality and speed up the production of VEs by automating parts of the production process.

  10. Data-Driven Haptic Modeling and Rendering of Viscoelastic Deformable Objects Using Random Forest Regression.

    In this work, we propose a new data-driven approach for haptic modeling and rendering of homogeneous viscoelastic objects. The approach is based on a well-known machine learning technique: the random forest. Here, we employ the random forest for regression, not classification. We acquire discrete-time interaction data for many automated cyclic compressions of a deformable object. A random forest is trained to estimate a nonparametric relationship between the position and the response forces. We consider only four extremum interactions for training the forest. Our hypothesis is that a random forest model trained on extremum interactions can estimate the response forces for unseen interactions. Our results show that a model trained with just 10% of the training data is capable of modeling unseen interactions with good accuracy, thus validating our hypothesis. Subsequently, we employ the trained model for haptic rendering. When implemented on a CPU, the model simulates the force feedback at an update rate faster than 1 kHz. Thus, the proposed approach provides very promising results for both modeling and rendering. In addition, the approach makes the data acquisition task simple, as it needs only four extremum interactions to generalize.
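
    A minimal sketch of the regression setup, using scikit-learn's RandomForestRegressor on synthetic compression data; the simulated material model and the choice of features (penetration depth and velocity) are illustrative assumptions, not the recorded data or the exact feature set of the work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for recorded compression data: penetration depth and
# velocity versus measured response force for a viscoelastic object.
rng = np.random.default_rng(42)
depth = rng.uniform(0.0, 10.0, size=5000)        # mm
velocity = rng.uniform(-50.0, 50.0, size=5000)   # mm/s
force = 0.8 * depth + 0.02 * velocity + 0.05 * rng.standard_normal(5000)  # N

# Train a random forest regressor mapping interaction state to response force.
X = np.column_stack([depth, velocity])
model = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
model.fit(X, force)

# A haptic rendering loop would query the trained forest at each update step.
query = np.array([[4.2, -10.0]])
print(f"predicted response force: {model.predict(query)[0]:.2f} N")
```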