Drawing curves is a fundamental task in mid-air interactive applications such as 3D sketching, geometric modeling, handwriting recognition, and authentication. Existing research in mid-air drawing has focused solely on determining what the user drew, assuming that the intended curve has already been segmented from the continuous user-generated trajectory. In this work, our aim is to address the complementary problem: to determine when the user actually intended to draw, without the use of any prescribed gestures or hand-held controllers (e.g., Wii remote, HTC Vive). In our previously published work, we demonstrated that in mid-air drawing tasks, not only is it possible to statistically learn drawing intent from hand motion, but doing so is also perceived as more natural by users. Our idea was to simply classify each sample of the hand trajectory as either a stroke or a hover. Our current work investigates new representations of the user's motion, going beyond a single point (such as a tracked palm) to richer multi-point trajectories obtained from other skeletal joints such as the wrist and elbow. We trained several binary classifiers on five such trajectory representations, obtained from 3D drawing data collected from 25 users with a hand tracking device. We compare these representations and the corresponding classifiers for predicting user intent in mid-air drawing. Our extended approach resulted in improved prediction accuracy (mean: , min: , max: ) with respect to our earlier work (mean: , min: , max: ).
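The core idea described above, labeling each sample of a multi-joint hand trajectory as either a stroke or a hover, can be sketched as follows. This is an illustrative toy only: the feature choices, the hand-picked logistic weights, and the synthetic frames are assumptions for demonstration, not the representations or classifiers actually used in this work.

```python
# Hypothetical sketch: turn each time step of a mid-air trajectory into a
# feature vector built from several skeletal joints (palm, wrist, elbow),
# then label it stroke (1) or hover (0) with a binary classifier.
# Features, weights, and data below are illustrative assumptions.

import math

def features(palm, wrist, elbow, prev_palm):
    """Per-sample features from a multi-joint pose: palm displacement
    since the previous frame (a speed proxy) plus palm-wrist and
    wrist-elbow distances capturing arm configuration."""
    return [math.dist(palm, prev_palm),
            math.dist(palm, wrist),
            math.dist(wrist, elbow)]

def predict(x, w, b):
    """Tiny logistic classifier: returns 1 = stroke, 0 = hover."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Toy weights: slow, deliberate motion -> stroke; fast repositioning -> hover.
w, b = [-8.0, 0.0, 0.0], 1.0

# Two synthetic frames: a slow drawing move and a fast hover move.
slow = features((0.01, 0, 0), (0, 0.1, 0), (0, 0.3, 0), (0.0, 0, 0))
fast = features((0.30, 0, 0), (0, 0.1, 0), (0, 0.3, 0), (0.0, 0, 0))

print(predict(slow, w, b), predict(fast, w, b))  # prints "1 0"
```

In practice the classifier would be trained on labeled trajectory data rather than hand-tuned, and the feature vector would span a richer set of joints and temporal windows; the point here is only the per-sample stroke/hover framing.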