I am developing an application in which I constantly receive samples of the heart pulse.

I defined an interval of t seconds.

Within each interval of t seconds, I have n samples.

In every interval, I want to calculate the tendency of those n samples. For example, let's say I have n = 5 and samples with values {70, 88, 95, 103, 115}. I want to recognize that there is growth in the heart pulse, and I want some measure of the rate of growth/decrease/almost no change.

I have thought of two approaches to solving this problem.

  1. I calculate a linear approximation of the n samples using linear regression, applying least squares via the normal equations. (I treat each sample as a two-coordinate vector, with the x coordinate being time and the y coordinate being the heart pulse.) The normal equations give me a linear function of the form y = mx + b, and my measure of the tendency is the slope, i.e. the value of m. (A minimal sketch of this is included after the list.)

  2. The second approach is to calculate the correlation (Pearson's correlation) between the vector x and the vector y, where x is time and y is the heart pulse. (Also sketched after the list.)
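
For concreteness, here is a minimal sketch in Python of approach 1, assuming the samples are equally spaced in time (the function name and the `dt` parameter are just illustrative placeholders):

    def slope_least_squares(samples, dt=1.0):
        """Fit y = m*x + b by least squares via the normal equations.

        samples: heart-pulse values, assumed equally spaced dt seconds
        apart (dt is an assumed parameter; set it to the real sampling
        period). Returns the slope m, in bpm per second.
        """
        n = len(samples)
        xs = [i * dt for i in range(n)]  # sample times
        sum_x = sum(xs)
        sum_y = sum(samples)
        sum_xy = sum(x * y for x, y in zip(xs, samples))
        sum_xx = sum(x * x for x in xs)
        # Closed-form solution of the 2x2 normal equations for the slope
        return (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)

    print(slope_least_squares([70, 88, 95, 103, 115]))  # 10.5 bpm per sample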
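
And a similar sketch of approach 2, Pearson's correlation between time and pulse, under the same assumptions:

    import math

    def pearson_r(samples, dt=1.0):
        """Pearson correlation between time x and heart pulse y.

        Returns r in [-1, 1]: near +1 means a steady increase, near -1
        a steady decrease, near 0 no linear trend. Note that var_y is
        zero for a perfectly constant signal; guard against that in
        real code.
        """
        n = len(samples)
        xs = [i * dt for i in range(n)]
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in samples)
        return cov / math.sqrt(var_x * var_y)

    print(pearson_r([70, 88, 95, 103, 115]))  # ~0.986: an almost perfectly steady rise

One thing I noticed while writing these: the two measures are related (r = m * s_x / s_y, where s_x and s_y are the standard deviations of x and y), but they answer different questions. The slope m measures how steep the trend is in physical units, while r measures how consistent the trend is, independent of its magnitude.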

Which approach do you think is better for my problem (determining the tendency of the heart pulse)? Or do you have a better algorithm for solving this problem?

asked May 18 '13 at 00:16 by George3141 (edited May 18 '13 at 00:20)