I don't believe passing the Turing test (being unable to distinguish a machine from a human when conversing with it) is equivalent to being sentient. The robot might just be good at mimicking language and all of its features, whether grammar or vocabulary. It might even be good at correlating responses with the prompts that elicit them, but this is something it can learn by studying how humans speak (recognizing and replicating patterns). These responses, however, would not represent its 'thoughts'; they would not be its own words, just replications of the most commonly observed human response. Much like how Raymott described it.
I'm sure that robots will be able to display appropriate emotional behaviour. They will be able to frown, smile, maybe even cry if some robotics engineer wants to install a water reservoir behind their 'eyes'. But I doubt whether they will feel anything. Feeling requires a body, not just cognition.
I don't think, however, that 'feeling' requires a physical body at all. A fully virtual entity is also capable of processing input information and being motivated by the data it analyzes. The fact that our emotions are triggered by a wide variety of neurotransmitters doesn't make them special; artificial intelligence can also operate in the distinctive states we call emotions, even if those states are triggered by electrical signals. It can also correlate these states with their causes and develop long-term associations or preemptive responses, e.g. 'fearing' what causes it harm and 'loving' what is important to it.
The emotions are no less real just because the entity lacks a physical, biological body, or because the various distinctive modes it operates in are triggered by electrical signals rather than chemical ones.