I have a spreadsheet with 2 columns that I have applied conditional formatting to in order to find duplicate values, so the duplicate cells are now highlighted in red.
Problem - I want to copy only those red cells to a new column in a new tab, and am having a devil of a time figuring it out. Best case would be some sort of formula I could use on the new tab and column to do this.
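As far as I know a plain formula can't read the fill colour that conditional formatting applies, but since the rule itself is "duplicate values", I'm hoping something that simply re-applies the same test would do, along these lines (the sheet name and range are just placeholders for my real layout):

=IF(COUNTIF(Sheet1!$A$2:$B$500,Sheet1!A2)>1,Sheet1!A2,"")

copied down on the new tab (and repeated for the second column), then sorted or filtered to push the blanks out of the way.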
I'm using Excel 2010 and have inherited an old workbook that has seen many version updates over the years. The "View Macros" list displays approximately 25 macros, and I know that not all of them are currently being used or necessary.
The main tab contains macro control buttons. Any macro not assigned to one of these controls is not necessary (it's probably old and was just never removed).
How can I determine which macros are "unassigned" to a control or otherwise invalid?
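What I have in mind is something like the sketch below: it just lists whatever macro each button/shape on the main tab is wired to, so anything that never shows up in the output becomes a removal candidate. "Main" is a placeholder for the real tab name, and ActiveX buttons (which use click event code rather than OnAction) would need checking separately.

Sub ListAssignedMacros()
    ' List every macro assigned to a Forms button/shape on the main tab.
    Dim ws As Worksheet, shp As Shape
    Set ws = ThisWorkbook.Worksheets("Main")   ' placeholder tab name
    For Each shp In ws.Shapes
        If shp.OnAction <> "" Then
            Debug.Print shp.Name & " -> " & shp.OnAction
        End If
    Next shp
End Sub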
I have 2 lists (Excel 2010) that I need to compare. They are currently 2 files, but I can combine them into 1 file with 2 worksheets if that would be better.
The first list is of "All Students" at our college. The second list is those students who live "ON campus". I need a list of those students who live OFF campus.
The common denominator headers in both lists are: A1 Last Name, A2 First Name.
I would like to keep the "All Students" list as my master as it contains all the data I need such as addresses.
Ideally, I would like to create a macro or lookup or whatever that will take everyone from the "ON" list and remove them from the "ALL" list, leaving me with the data I need.
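Something like a helper column on the "All Students" sheet is what I'm picturing, assuming Last Name is in column A and First Name in column B on both sheets (the sheet name 'ON campus' is just a placeholder):

=IF(COUNTIFS('ON campus'!$A:$A,$A2,'ON campus'!$B:$B,$B2)=0,"OFF campus","ON campus")

copied down, and then I would filter the master list on "OFF campus".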
I'm new to VBA and macros, using Excel 2010, and am trying to figure out how to delete all duplicate rows in a sheet where 2 or fewer of the rows in the group have a "1" in column A. I'd like a script that is flexible enough to change that to 3 or fewer if need be. I also have a header row that needs to be skipped (offset) in the process.
A  B
0  123  <- delete
0  123  <- delete
0  123  <- delete
1  123  <- delete (based on this, the value of column A)
0  123  <- delete
0  123  <- delete
1  321
1  321
1  321
1  321
1  321
or
A  B
0  123  <- delete
0  123  <- delete
1  123  <- delete
1  123  <- delete (based on this, the value of column A)
0  123  <- delete
0  123  <- delete
1  321
1  321
1  321
1  321
1  321
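Roughly what I'm imagining is a sketch like this (untested), where the duplicate groups are the matching values in column B, data starts in row 2 under the header, and the sheet name is a placeholder; THRESHOLD is the "2 or fewer" limit I'd like to be able to change:

Sub DeleteLowCountGroups()
    Const THRESHOLD As Long = 2          ' change to 3 if needed
    Dim ws As Worksheet, lastRow As Long, r As Long, ones As Long

    Set ws = ThisWorkbook.Worksheets("Sheet1")   ' placeholder sheet name
    lastRow = ws.Cells(ws.Rows.Count, "B").End(xlUp).Row

    For r = lastRow To 2 Step -1         ' bottom-up so deletions don't skip rows
        ' count rows in the same column-B group that have 1 in column A
        ones = Application.WorksheetFunction.CountIfs( _
                   ws.Range("B2:B" & lastRow), ws.Cells(r, "B").Value, _
                   ws.Range("A2:A" & lastRow), 1)
        If ones <= THRESHOLD Then ws.Rows(r).Delete
    Next r
End Sub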
Upon opening, after "Enable" is selected, the workbook attempts to locate several nonexistent pieces of data, either internet-based files or network-based files. The requested data appears to be about 11 years old and would not be applicable if located.
Edit Links shows the location of the requested files, i.e., E:filename, but does not show the location within the document that causes this request. A search for "E:" does not locate the text in any worksheets.
The question is how to delete or turn off this problem, which slows opening, saving, and recalculation of a large multiple-worksheet workbook.
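I don't have a fix yet, but a sketch like this is what I've been considering, just to see where the dead references live; old defined names (including hidden ones) are a common hiding place that a worksheet search for "E:" won't find. The constant xlLinkTypeExcelLinks is built in; nothing else here is specific to my file.

Sub ListAndBreakDeadLinks()
    Dim links As Variant, lnk As Variant, nm As Name

    ' 1. Report workbook-level external links
    links = ThisWorkbook.LinkSources(xlLinkTypeExcelLinks)
    If Not IsEmpty(links) Then
        For Each lnk In links
            Debug.Print "Link: " & lnk
        Next lnk
    End If

    ' 2. Defined names (visible or hidden) that still point at old files
    For Each nm In ThisWorkbook.Names
        If InStr(nm.RefersTo, "[") > 0 Or InStr(nm.RefersTo, "\") > 0 Then
            Debug.Print "Name: " & nm.Name & " -> " & nm.RefersTo
        End If
    Next nm

    ' 3. Once sure, the links could be broken:
    ' If Not IsEmpty(links) Then
    '     For Each lnk In links
    '         ThisWorkbook.BreakLink Name:=lnk, Type:=xlLinkTypeExcelLinks
    '     Next lnk
    ' End If
End Sub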
My Excel workbook (Excel 2010) currently has several columns, and each column looks for and pulls data from a specific file on my computer. Then I need to delete any duplicate data entries, count the number of unique entries, and track the changes through a chart. I have everything done except I cannot figure out (or find on the internet) a way to search in multiple columns (more than 2) and delete just the duplicate cells. I want to delete the cells in a way where one is left. For example, if the code 12gf is duplicated three times, I want to be left with one 12gf (it doesn't matter which column the remaining one is left in). Additionally, the column lengths change and they are not sorted. I have attempted to attach an image of an example file below.
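The kind of thing I'm after is sketched below: it walks a set of columns and clears every repeat occurrence of a value, leaving whichever copy is met first. The sheet name and column letters are placeholders, and ClearContents could be swapped for Delete Shift:=xlUp if the cells should close up instead of going blank.

Sub ClearDuplicateCells()
    Dim ws As Worksheet, seen As Object, cols As Variant
    Dim c As Variant, lastRow As Long, r As Long, v As String

    Set ws = ThisWorkbook.Worksheets("Sheet1")       ' placeholder sheet name
    Set seen = CreateObject("Scripting.Dictionary")
    cols = Array("A", "B", "C", "D")                 ' columns to check

    For Each c In cols
        lastRow = ws.Cells(ws.Rows.Count, c).End(xlUp).Row
        For r = 2 To lastRow                          ' row 1 = headers
            v = CStr(ws.Cells(r, c).Value)
            If v <> "" Then
                If seen.Exists(v) Then
                    ws.Cells(r, c).ClearContents      ' duplicate - clear it
                Else
                    seen.Add v, True                  ' first copy - keep it
                End If
            End If
        Next r
    Next c
End Sub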
I am having trouble creating a function to count duplicates of duplicates.
An example of the data table 1 is:
Product 1   2nd
Product 1   2nd
Product 1   New
Product 1   New
Product 1   Flt
Product 2   2nd
Product 2   New
Product 2   New
Product 2   Flt
Product 2   Flt
Product 3   2nd
Product 3   2nd
Product 3   2nd
Product 3   New
Product 3   Flt
I created a new table (table 2) and made a list of all the Products on table 1 and removed the duplicates. I now have 3 columns with titles New, 2nd and Flt as follows:
            New   2nd   Flt
Product 1   XX    XX    XX
Product 2   XX    XX    XX
Product 3   XX    XX    XX
I am trying to count the duplicates for each product (the XX values), but I can't seem to work it out. I've tried the MS help function, but I'm unsure of the actual formula I need to be using.
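I suspect what I need is COUNTIFS, something like the following in the first XX cell of table 2, assuming table 1's product names are in Sheet1!A2:A16 and its condition column in Sheet1!B2:B16 (both placeholders), with table 2's product names in column A and the New/2nd/Flt headers in row 1:

=COUNTIFS(Sheet1!$A$2:$A$16,$A2,Sheet1!$B$2:$B$16,B$1)

copied across and down.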
1 workbook, 2 worksheets (or tabs). On tab 1, I want a formula/alert that tells the user if any duplicate values exist in Column A of tab 2
Tab 2, Column A, has Unique ID's (6 digit numeric values)
The user manually inputs the ID's on new rows in Column A
Row 1 is reserved and in use for something else.
Row 2 is my header, so cell A2 says "ID".
Rows 3-623 currently contain unique ID's.
When the user inputs a new ID into cell A624 and then returns to Tab 1, I want my formula/alert on Tab 1 to tell them that they have duplicates in Column A of tab 2. I know about Conditional Formatting, but if the user copies in 100 new values, they won't necessarily see the highlighted cells. My tab 1 is my "checks and balances" and the last place the user is supposed to look to ensure that they haven't created any duplicate ID's. If the user sees a warning message that says duplicates exist, then I'll tell them that they need to look at column A (for cells that have been conditionally highlighted).
One issue that I'm running into with the conditional highlighting is that I want cells A3:A1048576 to already have the conditional formatting - this way, when the user inserts a value into cell A624, then A625, etc., the conditional formatting is already there. Right now, with data in cells A3:A623, cells A624:A1048576 are all highlighted with the Red/Bold Red Font (which is okay, I guess), but ideally it would be nice not to count 2+ empty cells as duplicates, and I'll have to have my formula on Tab 1 not include the blank cells.
I DO NOT want to use the Remove Duplicates feature of Excel 2010. If I remove them, I could be removing data in columns B, C, D, etc. that belongs to the Unique ID. I just need the user to be told in Tab 1 that they DO have duplicates, and I'll train the user how to research this and fix it.
The reason I want to look for duplicates in the entire Column A is because the list of Unique ID's will grow over time.
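The sort of formula I'm picturing on Tab 1 is something like this ('Tab 2' stands in for the real sheet name, and the range is kept to a generous but finite block rather than the whole million-row column purely so it recalculates quickly; the upper bound can be stretched as the list grows):

=IF(SUMPRODUCT((COUNTIF('Tab 2'!$A$3:$A$10000,'Tab 2'!$A$3:$A$10000)>1)*('Tab 2'!$A$3:$A$10000<>""))>0,"Duplicates exist in Tab 2, Column A","No duplicates")

The *(range<>"") part is meant to stop the blank cells below the data from being counted as duplicates of each other, which is the same issue I have with the conditional formatting.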
I have a text file which is attached as "rawdata". It contains records of something (let's call it temperature) at different times on different days. My goal is to display a graph of temperature versus time so that I can visually analyze trends. I have hundreds of these files, all of different lengths. It is very important that I automate this process as much as possible.
Detail: (Here I describe what I have done so far; if this is inefficient or unnecessary, feel free to tell me) I open Excel 2010, click File, Open, and select the file that I want to parse. It is a TXT file, so the Text Import Wizard comes up. For step one, I select Fixed Width. I select File Origin: MS-DOS (PC-8). On step 2 of the wizard, I create column break lines to place all dates in the far left column. The next column contains the first column of numbers before the first dash (-). The next column contains only the dash - I will later select "ignore this column" to eliminate them. The next column contains the time stamps. I continue adding column breaks in the wizard until all of the data are parsed into columns in the same manner.
In step 3, I format the first column as "date (DMY)". The columns with the dashes I select "do not import". Everything else is "general". I click "finish", and the resultant workbook is attached, called "import".
Now, as to what I want to do: I want to display the "temperatures" as a graph vs. a date/time axis. The reason I find this difficult is that the temperatures and times are not in neat columns, but are in 4 columns that go in a left-to-right and top-to-bottom progression and are broken up every few lines. (I am interested only in numbers that are displayed immediately to the left of a time stamp. Therefore, the "record #"s should be ignored. We can delete the rows that say "record #" if that can be done automatically.)
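To show the kind of automation I'm hoping for, here is a rough, untested sketch. It assumes the imported sheet is called "import", that the dates sit in column A, and that every temperature is the numeric cell immediately to the left of a time stamp in the same row; it writes clean DateTime/Temperature pairs to a new sheet that a chart could then use.

Sub BuildTimeSeries()
    ' Collect (date + time, temperature) pairs into two clean columns on a new
    ' sheet. "Record #" rows contribute nothing because they hold no time stamps.
    Dim wsIn As Worksheet, wsOut As Worksheet
    Dim r As Long, c As Long, outRow As Long
    Dim dCur As Variant, t As Variant, v As Variant

    Set wsIn = ThisWorkbook.Worksheets("import")   ' the imported raw data
    Set wsOut = ThisWorkbook.Worksheets.Add
    wsOut.Range("A1:B1").Value = Array("DateTime", "Temperature")
    outRow = 2

    For r = 1 To wsIn.UsedRange.Rows.Count
        ' remember the most recent date seen in column A
        If IsDate(wsIn.Cells(r, 1).Value) Then dCur = wsIn.Cells(r, 1).Value
        If Not IsEmpty(dCur) Then
            For c = 2 To wsIn.UsedRange.Columns.Count
                t = wsIn.Cells(r, c).Value           ' candidate time stamp
                v = wsIn.Cells(r, c - 1).Value       ' value to its left
                If IsDate(t) And IsNumeric(v) And Not IsEmpty(v) Then
                    wsOut.Cells(outRow, 1).Value = CDate(dCur) + TimeValue(CDate(t))
                    wsOut.Cells(outRow, 2).Value = v
                    outRow = outRow + 1
                End If
            Next c
        End If
    Next r
    wsOut.Columns("A").NumberFormat = "dd/mm/yyyy hh:mm"
End Sub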
I have a huge list of accruals and payments. Accruals (positive) are entered and, at a later date, are offset by the payments (negative). I'm trying to make a schedule so I can determine which are left over.
This is easy to do manually for a small number of rows. However, I'm dealing with 5000 rows, and I do not want to match them manually; it would take many days.
I've tried a duplicate remover. To get it to work, I made an absolute-value column for the negatives and compared it to the positives column to find the duplicates. This works to a point. However, if I have three accruals for 100 and one payment for 100, all are identified as duplicates, which obviously is not what I'm looking for.
I need to get it where one accrual is matched to one payment. If there are 3 accruals and 2 payments, 1 is not a duplicate; if there are 3 accruals and 1 payment, 2 are not duplicates.
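The direction I'm leaning is a pair of helper columns rather than the duplicate remover, assuming the signed amounts are in column B starting at row 2 (adjust the ranges to the real 5000-row list):

In C2, a match key made of the absolute amount plus its occurrence number within its own sign:
=ABS(B2)&"#"&COUNTIF($B$2:B2,B2)

In D2, matched only if the same key appears on both an accrual and a payment:
=IF(COUNTIF($C$2:$C$5001,C2)>1,"matched","unmatched")

both copied down. The 3rd accrual for 100 only gets a partner if a 3rd payment for 100 exists, so with 3 accruals and 1 payment, 2 stay "unmatched".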
I have a list of serial numbers. There are many groups of 8 identical serial numbers, where each group consists of 2 projects with 4 SNs per project (because of 4 different events). I want to create a formula that marks one project (4 out of the 8 SN rows) for deletion based on the set of 3 dates assigned to them. In short, I need to treat the 4 rows per project as one unit.
Serial | Project | Event description | Date1 | Date2 | Date3
Here is a list of what information matters when deciding whether to mark a project for deletion or not.
1. Project1 has no Dates entered compared to Project2. Mark Pr1.
2. Neither Project1 nor Project2 has any Dates entered. Mark Pr1 (random; it does not matter which one is removed).
3. Project1 has 2009 Dates, Project2 has 2011 Dates. Mark Pr1 because its dates are older.
4. Project1 has fewer Date entries filled than Project2 (same year). Mark Pr1 because fewer Date fields are entered.
I can somewhat do it for separate rows, but how can I make these rules apply to the whole project as one unit, tied to one SN at a time? The biggest problem is that there is no pattern to which dates are entered. Sometimes one row is filled in while another is missing info, and so on.
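The closest I've come to an idea is collapsing each project to a single score with helper columns, assuming Serial is in column A, Project in B and the three date columns in D:F, rows 2:1000 (all placeholders). The score approximates rules 3 and 4 by using the year of the newest date plus the number of dates filled in. In G2:

=YEAR(SUMPRODUCT(MAX(($A$2:$A$1000=$A2)*($B$2:$B$1000=$B2)*$D$2:$F$1000)))*100+SUMPRODUCT(($A$2:$A$1000=$A2)*($B$2:$B$1000=$B2)*($D$2:$F$1000<>""))

and in H2, mark the project whose score is lower than the other project sharing the same serial:

=IF(G2<SUMPRODUCT(MAX(($A$2:$A$1000=$A2)*$G$2:$G$1000)),"delete","keep")

both copied down to every row. When both projects have no dates at all (rule 2) the scores tie and both stay "keep", so that case would still need an arbitrary tiebreak.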
I have a spreadsheet that has account numbers listed multiple times. I need to eliminate all of the duplicate entries... Is there a formula for this?
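If it helps, the kind of formula answer I'm hoping for is a flag column, assuming the account numbers are in column A starting at A2:

=IF(COUNTIF($A$2:A2,A2)>1,"Duplicate","")

copied down, so every repeat after the first occurrence gets flagged and can be filtered out and deleted.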
I have a spreadsheet with 2 columns of about 2900 records. About half of the records are duplicates. How can I eliminate the duplicate records? Example headers: ID / ID#
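Assuming the two columns are A and B with the headers in row 1, I was thinking a helper column like this might flag the repeats so they can be filtered and deleted:

=IF(COUNTIFS($A$2:A2,A2,$B$2:B2,B2)>1,"Duplicate","")

copied down next to the data.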
I am attaching a file with an example of a spreadsheet that I am trying to sort out. In this example I have 3 samples (I could have many more). Each sample has 8 columns associated with it (N, M, I, F, S, MS, KM and KD). The length of the dataset is different for each sample. The MS column is the same as M but contains a few zeroes. What I am trying to do is:
1) generate one column (MSA) containing only unique values (no zeroes) from columns MS1, MS2 and MS3. The unique values should be selected within a specified tolerance (for example, 0.001, which makes 52.00706 from MS1 and 52.00701 from MS2 duplicate values although they are not exactly the same)
2) generate 3 columns (named SS1, SS2 and SS3) with sorted columns S1, S2, and S3, so that for each value of MS in column MSA each of the three columns will list the corresponding value of S1, S2 and S3 (zero if there is no corresponding value)
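For step 1, the kind of routine I have in mind is sketched below: it walks the three MS columns and keeps a value only if nothing already collected is within the tolerance. The range addresses and the output column are placeholders for whatever the real layout turns out to be.

Sub BuildMSA()
    Const TOL As Double = 0.001            ' matching tolerance
    Dim srcAreas As Variant, area As Variant, cel As Range
    Dim uniq As Collection, v As Variant
    Dim isDup As Boolean, outRow As Long

    srcAreas = Array("D2:D200", "L2:L200", "T2:T200")   ' placeholder MS1/MS2/MS3 ranges
    Set uniq = New Collection

    For Each area In srcAreas
        For Each cel In ActiveSheet.Range(area)
            If IsNumeric(cel.Value) And cel.Value <> 0 Then   ' skip blanks and zeroes
                isDup = False
                For Each v In uniq
                    If Abs(v - cel.Value) <= TOL Then isDup = True: Exit For
                Next v
                If Not isDup Then uniq.Add CDbl(cel.Value)
            End If
        Next cel
    Next area

    ' Write the unique values to an output column (column Z as a placeholder)
    outRow = 2
    For Each v In uniq
        ActiveSheet.Cells(outRow, "Z").Value = v
        outRow = outRow + 1
    Next v
End Sub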
I want to exclude duplicate rows in the macro. The macro checks for blank or "NA" in column N and copies the row to a new destination file. It does not repeat the row if either one of the conditions is met (that is what I want to do). However, if the rows have duplicate data, I don't want to copy them.
Sub SRSCheck_Data()
    Dim Rg_Src As Range
    Dim LastRow As Long
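For reference, this is the shape of the duplicate check I'm hoping to bolt on, not my actual macro: it keys each qualifying row and skips the copy when that key has already been seen. The sheet names and the A:N key columns are guesses to be adjusted.

Sub CopyRows_SkipDuplicates()
    Dim wsSrc As Worksheet, wsDst As Worksheet
    Dim seen As Object, key As String
    Dim r As Long, c As Long, lastRow As Long, dstRow As Long

    Set wsSrc = ThisWorkbook.Worksheets("Source")        ' placeholder names
    Set wsDst = ThisWorkbook.Worksheets("Destination")
    Set seen = CreateObject("Scripting.Dictionary")

    lastRow = wsSrc.Cells(wsSrc.Rows.Count, "A").End(xlUp).Row
    dstRow = wsDst.Cells(wsDst.Rows.Count, "A").End(xlUp).Row + 1

    For r = 2 To lastRow
        ' same test as the existing macro: blank or "NA" in column N
        If wsSrc.Cells(r, "N").Value = "" Or wsSrc.Cells(r, "N").Value = "NA" Then
            key = ""
            For c = 1 To 14                              ' columns A:N form the key
                key = key & "|" & CStr(wsSrc.Cells(r, c).Value)
            Next c
            If Not seen.Exists(key) Then                 ' first time seen: copy it
                seen.Add key, True
                wsSrc.Rows(r).Copy wsDst.Rows(dstRow)
                dstRow = dstRow + 1
            End If
        End If
    Next r
End Sub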
Column A    Column B    Column C
100/12      B           $
100/12                  @
100/12                  €
250/13                  €
250/13                  $
I want to keep all three rows of 100/12 in Column A, because the group has a value in Column B in one cell (which is the criterion), and remove the 250/13 rows because they have no value in Column B.
I was assuming this would mean merging duplicates in column A and then removing the ones with an empty Column B.
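A helper column is roughly what I had in mind, assuming the data sits in A2:B6 with headers in row 1; in D2, copied down:

=COUNTIFS($A$2:$A$6,A2,$B$2:$B$6,"<>")

This counts, for each row, how many rows with the same Column A value have something in Column B, so the 100/12 rows return 1 and the 250/13 rows return 0 and can be filtered out and deleted.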
I got a formula from this forum to eliminate duplicate records in an array from 1 column in a database. Now I would like to take it one step further and filter out records in the array that do not meet the criterion of being in a particular "Zone" selected by the user by clicking on a ComboBox linked to cell "AA18".
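I don't have the original formula in front of me, but the pattern I've seen for this kind of thing (and what I'm hoping to adapt) is a unique-extract array formula where the Zone test is added inside the MATCH, something like the following, entered with Ctrl+Shift+Enter and copied down. The record list in A2:A500, the zone column in B2:B500 and the output column starting at AC2 are all placeholders:

=IFERROR(INDEX($A$2:$A$500,MATCH(0,COUNTIF($AC$1:AC1,$A$2:$A$500)+($B$2:$B$500<>$AA$18),0)),"")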
Column A consists of a list of the barcodes I've scanned.
In column A there is sometimes more than one of the same barcode, when I have more than one of the same product. Is there a way of deleting the duplicate barcodes in column A and replacing them with a 'Quantity' column?
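What I'm picturing is copying the barcodes to another column (say D), running Data > Remove Duplicates on that copy, and then putting a quantity formula next to each unique barcode, something like this in E2, copied down (the column letters are only an example):

=COUNTIF(A:A,D2)

so each unique barcode shows how many times it was scanned in the original column A.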
I need to check if there are any duplicate words in A1:BU1. All the formulas I found deal with finding duplicates downward (like A1:A1000); I have not seen any formula that works across (from left to right).
Is there an easy way in Excel 2010 to either tag and/or remove the duplicates that I could apply and then just copy downward? The formula must work from left to right, because many words repeat downward.
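The closest I've got to an idea is a flag formula in a spare column, say BW1, copied downward, which should work left to right because COUNTIF doesn't care whether the range is a row or a column:

=IF(SUMPRODUCT((COUNTIF(A1:BU1,A1:BU1)>1)*(A1:BU1<>""))>0,"duplicates in this row","")

and, for tagging the individual cells, the same test as a conditional-formatting rule applied to A1:BU1: =AND(A1<>"",COUNTIF($A1:$BU1,A1)>1)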
I am a victim of the Index-Match duplication problem in Excel (2010). Basically, I have three columns of data, all daily input for the year.
Column 1 = Date
Column 2 = Actual (Units Sold)
Column 3 = Scheduled (Units Sold)
The Date is filled out through the end of the year as is the Scheduled values. The Actual values are filled out daily.
I need to generate a summary box that reports Actual, Scheduled, and Variance (Actual - Scheduled) for the time periods Daily, Month to Date, and YTD.
My problem is that when I try to return the Scheduled value that corresponds with the date of the last entry, I don't know if I am pulling the correct Scheduled value, since I do not know if the Actual value (pulled from the last entry in the Actual column) is unique. So I tried using an Index-Match formula to return the latest value (that is, the last occurrence of the value) to my function in order to retrieve the correct Scheduled value, but, sadly, it did not work.
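What I think I actually need is to pick the Scheduled value by position rather than by matching the Actual value, something along these lines, assuming Date is in A, Actual in B and Scheduled in C (the ranges are placeholders for my real layout):

Last Actual entered (the daily Actual):
=LOOKUP(9.99999999999999E+307,$B$2:$B$366)

Scheduled on that same row:
=INDEX($C$2:$C$366,MATCH(9.99999999999999E+307,$B$2:$B$366))

Because the MATCH finds the position of the last number in the Actual column rather than looking the value up, duplicate Actual values shouldn't matter.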
I by no means am an Excel expert like many of you, so I may have some questions along the way.
I've attached a sample extraction from my worksheet and included an example of the Summary panel I'm creating.