05-26-2016 04:08 PM
I am working on a motor insurance data set and want to create a model to predict the payment by third party based on input values such as number of claims, insured, area, etc.
The data set contains 2,000 records, and I want to model the total amount the insurance company has to pay.
My question is:
On a data set of this size, is it necessary to partition the data into training and validation sets?
05-27-2016 03:27 AM
I'm no data miner, so I can't answer for the statistical process part.
But 2,000 records is a tiny data set, so I can't imagine you would run into any performance issues.
Have you tried it and run into problems?
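On the partitioning question itself: with only 2,000 records, a single hold-out split leaves relatively little data for fitting, and k-fold cross-validation is a common alternative because every record is used for both training and validation. The sketch below illustrates both approaches using scikit-learn on synthetic data; the column meanings (claims, insured, area) and the linear model are hypothetical stand-ins for the poster's actual variables, not a prescription.

```python
# Sketch: hold-out split vs. k-fold cross-validation on a small data set.
# The data below is synthetic; the three predictors loosely mimic the
# poster's inputs (no. of claims, insured, area) but are NOT real data.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000  # same order of size as the poster's data set
X = np.column_stack([
    rng.poisson(3, n),          # hypothetical: number of claims
    rng.uniform(1, 100, n),     # hypothetical: insured amount
    rng.integers(1, 8, n),      # hypothetical: area code
])
# Hypothetical target: total payment, linear in the predictors plus noise
y = 500 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 50, n)

# Option 1: single hold-out partition (e.g. 70% training / 30% validation)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_r2 = LinearRegression().fit(X_tr, y_tr).score(X_va, y_va)

# Option 2: 5-fold cross-validation, which reuses all 2,000 rows for
# both fitting and validation -- often preferred when data is scarce
cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5)

print(f"hold-out R^2: {holdout_r2:.3f}")
print(f"5-fold mean R^2: {cv_r2.mean():.3f}")
```

Either way, the split is about getting an honest estimate of model error, not about performance: 2,000 rows will fit comfortably in memory with any tool.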