Arrays

In Data Structures and Algorithms (DSA), arrays are fundamental data structures used to store a collection of elements of the same type. Arrays offer constant-time random access by index and are widely used across algorithms and applications because of their simplicity and predictable memory layout.

Here are some key points about arrays in DSA:

  1. Contiguous Memory Allocation: Arrays allocate memory in a contiguous block, meaning that elements are stored one after the other in memory. This allows for efficient access to elements using their indices.
  2. Fixed Size: Arrays have a fixed size, meaning that the number of elements they can hold is predetermined at the time of declaration. Once an array is created, its size typically cannot be changed dynamically.
  3. Zero-based Indexing: In most programming languages, arrays are zero-indexed, meaning that the index of the first element is 0, the index of the second element is 1, and so on.
  4. Random Access: Arrays support constant-time access to elements by index. Given an index, you can directly access the element stored at that index without traversing the entire array.
  5. Operations on Arrays (a short C sketch of these operations follows this list):
    • Accessing: You can read or write individual elements in O(1) time using their indices.
    • Insertion/Deletion: Inserting or deleting an element at an arbitrary position requires shifting the elements after it, which takes O(n) time in the worst case.
    • Searching: You can search for an element using linear search, or binary search on a sorted array.
    • Sorting: Arrays can be sorted using various sorting algorithms such as bubble sort, insertion sort, merge sort, and quicksort.
    • Manipulation: You can modify elements in place based on your requirements.
  6. Multidimensional Arrays: Arrays can have more than one dimension, creating structures like matrices or tables. Multidimensional arrays can be implemented as arrays of arrays or using other data structures like dynamically allocated arrays of pointers.
  7. Memory Efficiency: Because elements sit in contiguous memory, arrays carry no per-element overhead and exhibit good cache locality, which makes sequential access fast.
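
To make these points concrete, here is a minimal C sketch (the values and sizes are illustrative) demonstrating random access, insertion with shifting, linear search, and a two-dimensional array:

#include <stdio.h>

int main() {
    // Fixed-size array with contiguous storage; valid indices are 0..4
    int arr[5] = {10, 20, 30, 40, 50};
    int count = 5; // number of elements currently in use

    // Random access: constant-time read and write by index
    printf("arr[2] = %d\n", arr[2]); // prints 30
    arr[2] = 35;                     // overwrite in place

    // Insertion at index 1: shift later elements right first (O(n) worst case).
    // Here the last element is dropped to stay within the fixed capacity.
    for (int i = count - 1; i > 1; i--)
        arr[i] = arr[i - 1];
    arr[1] = 15; // arr is now {10, 15, 20, 35, 40}

    // Linear search: O(n) scan for a target value
    int target = 35, found = -1;
    for (int i = 0; i < count; i++) {
        if (arr[i] == target) {
            found = i;
            break;
        }
    }
    printf("35 found at index %d\n", found); // prints 3

    // Two-dimensional array: a 2x3 matrix stored row by row
    int matrix[2][3] = {{1, 2, 3}, {4, 5, 6}};
    printf("matrix[1][2] = %d\n", matrix[1][2]); // prints 6

    return 0;
}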

Algorithm: Find The Lowest Value in an Array

How it works:

  1. Go through the values in the array one by one.
  2. Check if the current value is the lowest so far, and if it is, store it.
  3. After looking at all the values, the stored value will be the lowest of all values in the array.

#include <stdio.h>

int find_lowest_value(int arr[], int size) {
    // Check if the array is empty
    if (size == 0)
        return -1; // Sentinel for an empty array (ambiguous if -1 can be a valid element; see the note below)

    // Initialize the minimum value with the first element of the array
    int min_value = arr[0];

    // Iterate through the array to find the lowest value
    for (int i = 1; i < size; i++) {
        if (arr[i] < min_value) {
            min_value = arr[i];
        }
    }

    return min_value;
}

int main() {
    int arr[] = {5, 3, 8, 2, 9, 1};
    int size = sizeof(arr) / sizeof(arr[0]);

    // Find the lowest value in the array
    int lowest_value = find_lowest_value(arr, size);

    if (lowest_value == -1) {
        printf("The array is empty.\n");
    } else {
        printf("The lowest value in the array is: %d\n", lowest_value);
    }

    return 0;
}

Output:

The lowest value in the array is: 1

In this C implementation:

  • The find_lowest_value function takes an array arr and its size as parameters.
  • It iterates through the array to find the lowest value.
  • The main function initializes an array and computes its element count with sizeof(arr) / sizeof(arr[0]).
  • It then calls the find_lowest_value function to find the lowest value in the array and prints the result.

This implementation iterates through the array once to find the lowest value. The time complexity of this algorithm is O(n), where n is the number of elements in the array.
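
One caveat worth noting: using -1 as the empty-array signal is ambiguous whenever -1 (or a smaller value) can appear as a legitimate element. A common alternative is to report success through the return value and write the minimum through a pointer. A minimal sketch of that approach follows; the name find_lowest_value_checked is just for illustration:

#include <stdio.h>

// Returns 1 on success and writes the minimum through out;
// returns 0 if the array is empty. This avoids reserving -1
// as a sentinel, so any int value can appear in the array.
int find_lowest_value_checked(const int arr[], int size, int *out) {
    if (size == 0 || out == NULL)
        return 0;

    int min_value = arr[0];
    for (int i = 1; i < size; i++) {
        if (arr[i] < min_value)
            min_value = arr[i];
    }
    *out = min_value;
    return 1;
}

int main() {
    int arr[] = {-5, -3, -8}; // contains values below -1, which the sentinel version mishandles
    int lowest;

    if (find_lowest_value_checked(arr, sizeof(arr) / sizeof(arr[0]), &lowest))
        printf("The lowest value in the array is: %d\n", lowest);
    else
        printf("The array is empty.\n");

    return 0;
}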


Algorithm Time Complexity

The time complexity of an algorithm is a measure of the amount of time it takes to run as a function of the length of the input. It describes how the runtime of an algorithm grows as the size of the input increases.

Time complexity is usually expressed using big O notation, which provides an upper bound on the growth rate of the runtime of the algorithm. Here are some common time complexities and their corresponding growth rates:

  1. O(1) – Constant Time: The runtime of the algorithm does not depend on the size of the input. Example: Accessing an element in an array by index.
  2. O(log n) – Logarithmic Time: The runtime of the algorithm grows logarithmically with the size of the input. Example: Binary search in a sorted array (see the sketch after this list).
  3. O(n) – Linear Time: The runtime of the algorithm grows linearly with the size of the input. Example: Linear search in an unsorted array.
  4. O(n log n) – Linearithmic Time: The runtime of the algorithm grows in proportion to n times the logarithm of n. Example: Merge sort; quicksort on average (its worst case is O(n^2)).
  5. O(n^2) – Quadratic Time: The runtime of the algorithm grows quadratically with the size of the input. Example: Selection sort, Bubble sort.
  6. O(n^k) – Polynomial Time: The runtime of the algorithm grows as a polynomial function of the input size. Example: Naive matrix multiplication, which runs in O(n^3) time.
  7. O(2^n) – Exponential Time: The runtime of the algorithm grows exponentially with the size of the input. Example: Naive recursive Fibonacci algorithm.
  8. O(n!) – Factorial Time: The runtime of the algorithm grows as the factorial of the input size. Example: Brute-force solution for the traveling salesman problem.
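
As a concrete companion to entries 2 and 3 above, here is a minimal C sketch of binary search on a sorted array. It halves the search range at every step, which is what gives the O(log n) bound, in contrast to the O(n) linear scan:

#include <stdio.h>

// Binary search over a sorted (ascending) array: O(log n).
// Returns the index of target, or -1 if it is not present.
int binary_search(const int arr[], int size, int target) {
    int lo = 0, hi = size - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            lo = mid + 1; // discard the lower half
        else
            hi = mid - 1; // discard the upper half
    }
    return -1; // not found
}

int main() {
    int sorted[] = {1, 2, 3, 5, 8, 9};
    int size = sizeof(sorted) / sizeof(sorted[0]);

    printf("8 is at index %d\n", binary_search(sorted, size, 8)); // prints 4

    return 0;
}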
