Look into a Wait For File stage, or the file-watching ("file sniffing") capabilities of enterprise schedulers. Write to two locations: one your actual destination, the other an archive folder. Then have a cleanup job run every day, check each file's creation date, and get rid of anything over 30 days old.
The complete row is lost???
That is a problem. Look at your partitioning method. If the row is there with a count > 1, it is getting filtered out as a duplicate, and that's OK.
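Why partitioning matters here can be shown outside DataStage: a parallel dedup works per partition, so duplicates only collapse when all rows sharing a key land in the same partition. Hash-partitioning on the dedup key guarantees that; round-robin does not. A plain-Python sketch (not DataStage code):

```python
def round_robin(rows, n):
    # Deal rows out cyclically, ignoring the key.
    parts = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        parts[i % n].append(row)
    return parts

def hash_partition(rows, n, key):
    # All rows with the same key hash to the same partition.
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(key(row)) % n].append(row)
    return parts

def dedup_per_partition(parts, key):
    # Keep the first row per key within each partition only.
    out = []
    for part in parts:
        seen = set()
        for row in part:
            k = key(row)
            if k not in seen:
                seen.add(k)
                out.append(row)
    return out
```

With rows keyed 'a', 'a', 'b', round-robin across two partitions splits the two 'a' rows apart, so per-partition dedup leaves all three rows; hash partitioning keeps them together and dedup yields two.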
Open the file with Excel; it accepts tab-delimited files. See if it opens the file properly. If it doesn't, that means the file is not properly delimited. Let's start there.
Is that join going to reduce the data???
If yes, then you can load the data from the "other" database into the database that houses your source. Pass a SQL join and extract.
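The staging idea looks like this with sqlite3 standing in for both databases (table and column names are invented for illustration): copy the remote table into the source database, then let one SQL statement do the join so only the reduced result is extracted.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stands in for your source database
cur = con.cursor()

# Source table, already living in the current database.
cur.execute("CREATE TABLE orders (order_id INTEGER, cust_id INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10), (2, 11), (3, 12)])

# Staged copy of the table from the "other" database; in DataStage this
# would be its own load job, here we just create and fill it.
cur.execute("CREATE TABLE active_customers (cust_id INTEGER)")
cur.executemany("INSERT INTO active_customers VALUES (?)", [(10,), (12,)])

# The join (and the data reduction) now happens inside one database.
rows = cur.execute("""
    SELECT o.order_id, o.cust_id
    FROM orders o
    JOIN active_customers a ON a.cust_id = o.cust_id
    ORDER BY o.order_id
""").fetchall()
# Only orders whose customer exists in the staged table come back.
```

This is worthwhile precisely when the join shrinks the data: you stage one small table once instead of hauling the whole large extract across and joining downstream.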
Restrict it in SQL. Based on your example, you want everything from T1 that is not in T2, deduped.
Use a NOT EXISTS clause and run the output through a Remove Duplicates stage.
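The NOT EXISTS pattern, shown here against sqlite3 with made-up single-column tables; SELECT DISTINCT plays the role the Remove Duplicates stage would play downstream:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t1 (k TEXT)")
cur.execute("CREATE TABLE t2 (k TEXT)")
cur.executemany("INSERT INTO t1 VALUES (?)", [("a",), ("a",), ("b",), ("c",)])
cur.executemany("INSERT INTO t2 VALUES (?)", [("c",)])

# Everything in T1 that has no match in T2, with duplicates removed.
rows = cur.execute("""
    SELECT DISTINCT t1.k
    FROM t1
    WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.k = t1.k)
    ORDER BY t1.k
""").fetchall()
# 'c' is filtered out by NOT EXISTS; the duplicate 'a' collapses to one row.
```

If the source database can handle it, putting DISTINCT in the SQL saves the separate dedup stage entirely; otherwise keep the Remove Duplicates stage after the extract.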