From 76f7b7a98375c90c1fd008f461350484e97843ae Mon Sep 17 00:00:00 2001
From: babykav
Date: Mon, 17 Mar 2014 03:06:12 +0000
Subject: [PATCH 1/5] created my pipes exercises post. It's dope

---
 _posts/2014-03-16-pipesexercisesesbabykav | 69 +++++++++++++++++++++++
 1 file changed, 69 insertions(+)
 create mode 100644 _posts/2014-03-16-pipesexercisesesbabykav

diff --git a/_posts/2014-03-16-pipesexercisesesbabykav b/_posts/2014-03-16-pipesexercisesesbabykav
new file mode 100644
index 0000000..1ce58a2
--- /dev/null
+++ b/_posts/2014-03-16-pipesexercisesesbabykav
@@ -0,0 +1,69 @@
+---
+layout: post
+author: ethan
+date: 2014-03-16
+title: Pipes and Stuffs
+---
+
+## Process
+So doing these exercises I had to do them a couple of times to fully understand what they were doing, and I had to re-read parts of the lesson to really see what was going on. One thing I got stuck on was sorting: whenever I would sort and then try to get the head, it would give me something I wasn't expecting. After re-reading and trying everything over again at least twice, I figured out I was forgetting to add the -n to make it sort numerically. Sorting numerically gave me the right output.
+
+## Number 1
+Sorting with -n sorts numerically: the values are compared as numbers, so 10 sorts after 2 instead of before it.
+
+## Number 2
+They both count the number of lines. The difference is the "<". With "<", the shell opens the file and the command reads it from standard input, so only the line count is printed. Without it, the command opens the file itself and prints the filename along with the count.
+
+## Number 3
+I think that it only removes adjacent duplicates for a couple of different reasons.
+1) It only removes adjacent ones because people might have duplicates for a reason. Two identical entries right next to each other often mean one of them was an accident, so removing adjacent duplicates removes a likely mistake. Other entries that aren't beside each other could be there on purpose.
+2) It could just take too long to find duplicates that aren't adjacent. There is a ton of data in these files, and comparing each line only with the one before it lets the command work in a single pass instead of searching the whole file for repetitions.
+3) Sometimes the system can make copies by accident and the same entry just shows up again. It doesn't need to be there that many times and can be deleted to save space.
+4) To get it to find all of the duplicates you can combine it with sort: "sort salmon.txt | uniq". Sorting first reorders the list so that every duplicate ends up next to its copies, which lets uniq actually remove all of them.
+
+## Number 4
+Okay so this took a little thinking, but I think I've got it.
+
+We started with this file: animals.txt
+
+```
+2012-11-05,deer
+2012-11-05,rabbit
+2012-11-05,raccoon
+2012-11-06,rabbit
+2012-11-06,deer
+2012-11-06,fox
+2012-11-07,rabbit
+2012-11-07,bear
+```
+head -5 gives us the first 5 lines of animals.txt
+
+```
+2012-11-05,deer
+2012-11-05,rabbit
+2012-11-05,raccoon
+2012-11-06,rabbit
+2012-11-06,deer
+```
+
+Then tail -3 gives us the last three lines of that
+
+```
+2012-11-05,raccoon
+2012-11-06,rabbit
+2012-11-06,deer
+```
+
+then sort -r sorts those lines in reverse order (it re-sorts them, it doesn't just flip them)
+
+```
+2012-11-06,rabbit
+2012-11-06,deer
+2012-11-05,raccoon
+```
+
+and then > final.txt saves that in the file final.txt
+
+## Number 5
+cut -d , -f 2 animals.txt | sort | uniq
+This pulls out the animal column, sorts it so identical names are adjacent, and then removes any duplicates, leaving each animal listed once.
\ No newline at end of file

From 613e6e4f015a83d6753fff487840e91fb8e9edc6 Mon Sep 17 00:00:00 2001
From: babykav
Date: Sat, 22 Mar 2014 02:09:58 +0000
Subject: [PATCH 2/5] finally sending my pipes exercise in.

figured out how to make a branch for it
---
 _posts/2014-03-16-pipesexercisesesbabykav.md | 69 ++++++++++++++++++++
 1 file changed, 69 insertions(+)
 create mode 100644 _posts/2014-03-16-pipesexercisesesbabykav.md

diff --git a/_posts/2014-03-16-pipesexercisesesbabykav.md b/_posts/2014-03-16-pipesexercisesesbabykav.md
new file mode 100644
index 0000000..1ce58a2
--- /dev/null
+++ b/_posts/2014-03-16-pipesexercisesesbabykav.md
@@ -0,0 +1,69 @@
+---
+layout: post
+author: ethan
+date: 2014-03-16
+title: Pipes and Stuffs
+---
+
+## Process
+So doing these exercises I had to do them a couple of times to fully understand what they were doing, and I had to re-read parts of the lesson to really see what was going on. One thing I got stuck on was sorting: whenever I would sort and then try to get the head, it would give me something I wasn't expecting. After re-reading and trying everything over again at least twice, I figured out I was forgetting to add the -n to make it sort numerically. Sorting numerically gave me the right output.
+
+## Number 1
+Sorting with -n sorts numerically: the values are compared as numbers, so 10 sorts after 2 instead of before it.
+
+## Number 2
+They both count the number of lines. The difference is the "<". With "<", the shell opens the file and the command reads it from standard input, so only the line count is printed. Without it, the command opens the file itself and prints the filename along with the count.
+
+## Number 3
+I think that it only removes adjacent duplicates for a couple of different reasons.
+1) It only removes adjacent ones because people might have duplicates for a reason. Two identical entries right next to each other often mean one of them was an accident, so removing adjacent duplicates removes a likely mistake. Other entries that aren't beside each other could be there on purpose.
+2) It could just take too long to find duplicates that aren't adjacent. There is a ton of data in these files, and comparing each line only with the one before it lets the command work in a single pass instead of searching the whole file for repetitions.
+3) Sometimes the system can make copies by accident and the same entry just shows up again. It doesn't need to be there that many times and can be deleted to save space.
+4) To get it to find all of the duplicates you can combine it with sort: "sort salmon.txt | uniq". Sorting first reorders the list so that every duplicate ends up next to its copies, which lets uniq actually remove all of them.
+
+## Number 4
+Okay so this took a little thinking, but I think I've got it.
+
+We started with this file: animals.txt
+
+```
+2012-11-05,deer
+2012-11-05,rabbit
+2012-11-05,raccoon
+2012-11-06,rabbit
+2012-11-06,deer
+2012-11-06,fox
+2012-11-07,rabbit
+2012-11-07,bear
+```
+head -5 gives us the first 5 lines of animals.txt
+
+```
+2012-11-05,deer
+2012-11-05,rabbit
+2012-11-05,raccoon
+2012-11-06,rabbit
+2012-11-06,deer
+```
+
+Then tail -3 gives us the last three lines of that
+
+```
+2012-11-05,raccoon
+2012-11-06,rabbit
+2012-11-06,deer
+```
+
+then sort -r sorts those lines in reverse order (it re-sorts them, it doesn't just flip them)
+
+```
+2012-11-06,rabbit
+2012-11-06,deer
+2012-11-05,raccoon
+```
+
+and then > final.txt saves that in the file final.txt
+
+## Number 5
+cut -d , -f 2 animals.txt | sort | uniq
+This pulls out the animal column, sorts it so identical names are adjacent, and then removes any duplicates, leaving each animal listed once.
\ No newline at end of file

From 918f16ddeb67c117864b57112b52776c82e89824 Mon Sep 17 00:00:00 2001
From: babykav
Date: Sat, 22 Mar 2014 02:11:51 +0000
Subject: [PATCH 3/5] deleted a messed up one

---
 _posts/2014-03-16-pipesexercisesesbabykav | 69 -----------------------
 1 file changed, 69 deletions(-)
 delete mode 100644 _posts/2014-03-16-pipesexercisesesbabykav

diff --git a/_posts/2014-03-16-pipesexercisesesbabykav b/_posts/2014-03-16-pipesexercisesesbabykav
deleted file mode 100644
index 1ce58a2..0000000
--- a/_posts/2014-03-16-pipesexercisesesbabykav
+++ /dev/null
@@ -1,69 +0,0 @@
----
-layout: post
-author: ethan
-date: 2014-03-16
-title: Pipes and Stuffs
----
-
-## Process
-So doing these exercises I had to do them a couple of times to fully understand what they were doing, and I had to re-read parts of the lesson to really see what was going on. One thing I got stuck on was sorting: whenever I would sort and then try to get the head, it would give me something I wasn't expecting. After re-reading and trying everything over again at least twice, I figured out I was forgetting to add the -n to make it sort numerically. Sorting numerically gave me the right output.
-
-## Number 1
-Sorting with -n sorts numerically: the values are compared as numbers, so 10 sorts after 2 instead of before it.
-
-## Number 2
-They both count the number of lines. The difference is the "<". With "<", the shell opens the file and the command reads it from standard input, so only the line count is printed. Without it, the command opens the file itself and prints the filename along with the count.
-
-## Number 3
-I think that it only removes adjacent duplicates for a couple of different reasons.
-1) It only removes adjacent ones because people might have duplicates for a reason. Two identical entries right next to each other often mean one of them was an accident, so removing adjacent duplicates removes a likely mistake. Other entries that aren't beside each other could be there on purpose.
-2) It could just take too long to find duplicates that aren't adjacent. There is a ton of data in these files, and comparing each line only with the one before it lets the command work in a single pass instead of searching the whole file for repetitions.
-3) Sometimes the system can make copies by accident and the same entry just shows up again. It doesn't need to be there that many times and can be deleted to save space.
-4) To get it to find all of the duplicates you can combine it with sort: "sort salmon.txt | uniq". Sorting first reorders the list so that every duplicate ends up next to its copies, which lets uniq actually remove all of them.
-
-## Number 4
-Okay so this took a little thinking, but I think I've got it.
-
-We started with this file: animals.txt
-
-```
-2012-11-05,deer
-2012-11-05,rabbit
-2012-11-05,raccoon
-2012-11-06,rabbit
-2012-11-06,deer
-2012-11-06,fox
-2012-11-07,rabbit
-2012-11-07,bear
-```
-head -5 gives us the first 5 lines of animals.txt
-
-```
-2012-11-05,deer
-2012-11-05,rabbit
-2012-11-05,raccoon
-2012-11-06,rabbit
-2012-11-06,deer
-```
-
-Then tail -3 gives us the last three lines of that
-
-```
-2012-11-05,raccoon
-2012-11-06,rabbit
-2012-11-06,deer
-```
-
-then sort -r sorts those lines in reverse order (it re-sorts them, it doesn't just flip them)
-
-```
-2012-11-06,rabbit
-2012-11-06,deer
-2012-11-05,raccoon
-```
-
-and then > final.txt saves that in the file final.txt
-
-## Number 5
-cut -d , -f 2 animals.txt | sort | uniq
-This pulls out the animal column, sorts it so identical names are adjacent, and then removes any duplicates, leaving each animal listed once.
\ No newline at end of file

From 50964abc6627f2b8d77e93c29b926b74620500b9 Mon Sep 17 00:00:00 2001
From: babykav
Date: Sat, 29 Mar 2014 01:27:42 +0000
Subject: [PATCH 4/5] Created my tweet post.

I think I actually did it right on its own branch
---
 _posts/2014-03-28-ethanstweethaiku.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100644 _posts/2014-03-28-ethanstweethaiku.md

diff --git a/_posts/2014-03-28-ethanstweethaiku.md b/_posts/2014-03-28-ethanstweethaiku.md
new file mode 100644
index 0000000..ae30eb7
--- /dev/null
+++ b/_posts/2014-03-28-ethanstweethaiku.md
@@ -0,0 +1,14 @@
+---
+layout: post
+author: ethan
+title: Ethan's Tweet
+date: 2014-03-28
+---
+
+Here is my tweet that I did!!!
+Doing work on a Friday night can be soooo much fun
+
+
+
+
+babykav out
\ No newline at end of file

From 68910e50dc96e8453c62e60c15331ab4c4c57598 Mon Sep 17 00:00:00 2001
From: Ethan Kavanaugh
Date: Wed, 2 Apr 2014 11:37:17 -0400
Subject: [PATCH 5/5] deleted ' in title

---
 _posts/2014-03-28-ethanstweethaiku.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2014-03-28-ethanstweethaiku.md b/_posts/2014-03-28-ethanstweethaiku.md
index ae30eb7..5a9e1cd 100644
--- a/_posts/2014-03-28-ethanstweethaiku.md
+++ b/_posts/2014-03-28-ethanstweethaiku.md
@@ -1,7 +1,7 @@
 ---
 layout: post
 author: ethan
-title: Ethan's Tweet
+title: Ethans Tweet
 date: 2014-03-28
 ---
 
@@ -11,4 +11,4 @@ Doing work on a Friday night can be soooo much fun
 
 
 
-babykav out
\ No newline at end of file
+babykav out
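
The pipelines discussed in the pipes post above can be run end to end. Here is a minimal sketch, assuming a POSIX shell with the standard head, tail, sort, cut, uniq, and wc utilities; it recreates the lesson's animals.txt sample so the commands are self-contained.

```shell
# Recreate the animals.txt sample file from the lesson.
cat > animals.txt <<'EOF'
2012-11-05,deer
2012-11-05,rabbit
2012-11-05,raccoon
2012-11-06,rabbit
2012-11-06,deer
2012-11-06,fox
2012-11-07,rabbit
2012-11-07,bear
EOF

# Number 2: both count lines; with "<" the shell opens the file and wc reads
# standard input, so only the number is printed (no filename).
wc -l < animals.txt
wc -l animals.txt

# Number 3: uniq only collapses adjacent duplicates, so sort first to bring
# every duplicate together.
cut -d , -f 2 animals.txt | uniq          # removes nothing here: no duplicates are adjacent
cut -d , -f 2 animals.txt | sort | uniq   # each animal appears exactly once

# Number 4: first five lines, last three of those, reverse-sorted, saved.
head -5 animals.txt | tail -3 | sort -r > final.txt
cat final.txt
```

Note that the last pipeline's final.txt starts with 2012-11-06,rabbit, not 2012-11-06,deer: sort -r re-sorts the three lines in reverse order rather than simply reversing them.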