Backend Q&A
Date: January 07, 2024 · Tags: Backend
Apache Kafka
- What is Apache Kafka?
- Answer: Apache Kafka is a distributed streaming platform that is designed for building real-time data pipelines and streaming applications. It provides a publish-subscribe model and stores streams of records in a fault-tolerant way.
- Explain the key components of Apache Kafka.
- Answer: The key components include:
- Producer: Responsible for publishing messages to Kafka topics.
- Broker: Kafka server that stores and manages the topics.
- Consumer: Retrieves and processes messages from Kafka topics.
- Topic: A category or feed name to which records are published.
- What is the role of a Kafka Producer?
- Answer: A Kafka Producer is responsible for publishing messages to Kafka topics. It sends records to a topic by specifying the topic name and the data to be published.
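For concreteness, a minimal Java producer sketch using the official `kafka-clients` library; the broker address, topic name, key, and payload are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to the "orders" topic; the callback reports send failures.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"total\": 10}"),
                    (metadata, e) -> {
                        if (e != null) e.printStackTrace();
                    });
        }
    }
}
```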
- How is data stored in Kafka?
- Answer: Data in Kafka is stored in topics. Each topic is divided into partitions, and each partition maintains an ordered log of records. Partitions allow for parallelism and scalability.
- Explain Kafka Consumer Groups.
- Answer: Consumer Groups in Kafka allow multiple consumers to work together to consume a topic. Each consumer in a group receives a portion of the messages, enabling parallel processing.
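A sketch of a consumer joining a group, assuming the same placeholder broker and topic as above; any number of instances started with the same `group.id` split the topic's partitions among themselves:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors"); // consumers with this id share the work
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}
```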
- What is a Kafka Broker?
- Answer: A Kafka Broker is a Kafka server responsible for storing and managing topics. It handles the storage and retrieval of messages, as well as the distribution of messages across partitions.
- How does Kafka ensure fault tolerance?
- Answer: Kafka ensures fault tolerance through data replication. Each partition has multiple replicas, and if one broker fails, another replica can take over. This ensures that data is not lost in case of a broker failure.
- What is a Kafka Topic Partition?
- Answer: A Kafka Topic Partition is a division of a topic’s data. Each partition is an ordered, immutable sequence of records. Partitions allow Kafka to parallelize data storage and consumption.
- Explain the role of ZooKeeper in Apache Kafka.
- Answer: ZooKeeper is used for distributed coordination and management tasks in Kafka. It helps in leader election, broker registration, and storing metadata, ensuring the stability and reliability of the Kafka cluster. (Newer Kafka versions can run without ZooKeeper using the built-in KRaft consensus protocol.)
- How does Kafka guarantee message ordering within a partition?
- Answer: Within a partition, Kafka guarantees message ordering based on the order of arrival. Messages are assigned sequential offsets, and consumers read messages in the order of their offsets.
- What is the significance of the Kafka Offset?
- Answer: The Kafka Offset is a unique identifier assigned to each message in a partition. It represents the position of a consumer in a partition, enabling the tracking of consumed and yet-to-be-consumed messages.
- Explain the concept of Kafka Replication Factor.
- Answer: The Replication Factor in Kafka specifies the number of copies of each partition across different brokers. It ensures fault tolerance, allowing the system to continue functioning even if some brokers fail.
- What is the purpose of the Kafka Connect framework?
- Answer: Kafka Connect is a framework for building and running connectors, which are used to import/export data between Kafka and other data sources or sinks (databases, file systems, etc.).
- How does Kafka handle backpressure?
- Answer: Kafka handles backpressure naturally because consumption is pull-based: consumers fetch messages at their own pace and can commit an offset only after successfully processing a batch, so a slow consumer is never pushed more data than it can handle.
- Explain the use of Kafka Streams in Apache Kafka.
- Answer: Kafka Streams is a library for building stream processing applications. It enables the creation of applications that can process and analyze data in real-time as it flows through Kafka topics.
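For illustration, a minimal Streams topology sketch; the application id, topic names, and the uppercase transform are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app"); // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase()) // per-record transformation
             .to("output-topic");                     // write results back to Kafka

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start(); // runs until the JVM stops or streams.close() is called
    }
}
```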
- What is the role of the Kafka Producer Acknowledgment?
- Answer: The Producer Acknowledgment (`acks`) setting in Kafka determines the acknowledgment strategy between the producer and broker. It can be set to `all` (acknowledge after all in-sync replicas receive the message), `1` (acknowledge after the leader receives the message), or `0` (no acknowledgment, favoring throughput over delivery guarantees).
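A small Java fragment showing how the level might be chosen when building producer properties (broker address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksExample {
    // Builds producer properties with a chosen acknowledgment level: "0", "1", or "all".
    static Properties producerProps(String acks) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.ACKS_CONFIG, acks); // e.g. "all" for the strongest guarantee
        return props;
    }
}
```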
- How does Kafka handle data retention?
- Answer: Kafka handles data retention through configurable settings on topics. The retention period and size are set, allowing Kafka to automatically delete old messages beyond the defined thresholds.
- Explain the concept of Kafka Partitions Reassignment.
- Answer: Kafka Partitions Reassignment is the process of redistributing partitions among brokers. This is typically done to balance the load or replace a broker that has failed.
- What is the significance of the Kafka Log Compaction feature?
- Answer: Kafka Log Compaction is a feature that retains the latest message for each key, ensuring that the log maintains a compacted state. It is useful for scenarios where only the latest state of each record is important.
- How does Kafka handle message delivery semantics?
- Answer: Kafka provides different message delivery semantics:
- At most once: Messages may be lost but are not redelivered.
- At least once: Messages are guaranteed to be delivered but may be delivered multiple times.
- Exactly once: Messages are guaranteed to be delivered only once.
- What is the purpose of Kafka’s MirrorMaker tool?
- Answer: Kafka’s MirrorMaker tool is used for replicating data between Kafka clusters. It allows for data synchronization and replication across multiple Kafka environments.
- How can you monitor and manage a Kafka cluster?
- Answer: Kafka provides JMX (Java Management Extensions) metrics for monitoring. External tools like Prometheus and Grafana can be used for visualization. Additionally, Kafka comes with command-line tools for administration tasks.
- What is the role of the Kafka Consumer Offset?
- Answer: The Kafka Consumer Offset is a record of the last committed message’s offset for each partition. It allows consumers to resume reading from the last consumed position after a restart.
- Explain the concept of Kafka Log Segments.
- Answer: Kafka Log Segments are the fundamental storage unit for data in Kafka. They represent a continuous, append-only file where messages are written. Log segments are rolled over and archived based on configuration settings.
- How does Kafka handle data serialization and deserialization?
- Answer: Kafka itself is agnostic to the format of the data. It accepts and delivers messages as byte arrays. Data serialization and deserialization are the responsibility of the producer and consumer applications.
- What is the role of the Kafka Controller in a Kafka cluster?
- Answer: The Kafka Controller is responsible for managing the overall state of the Kafka cluster. It handles tasks such as leader election, partition reassignment, and broker registration.
- How does Kafka ensure data durability?
- Answer: Kafka ensures data durability by replicating messages across multiple brokers. Each partition has a configurable replication factor, and messages are not considered committed until they are replicated to the specified number of brokers.
- Explain the concept of Kafka Producers with Idempotence.
- Answer: Kafka Producers with Idempotence enabled guarantee that retries caused by network issues or timeouts do not introduce duplicates: each message is written to its partition exactly once, preserving per-partition ordering.
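A sketch of the corresponding producer configuration (broker address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class IdempotenceExample {
    // Producer properties with idempotence turned on; retries then cannot create duplicates.
    static Properties idempotentProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // implies acks=all and bounded in-flight requests
        return props;
    }
}
```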
- What is the purpose of Kafka’s High-Level Consumer API?
- Answer: The High-Level Consumer API in Kafka simplifies the process of writing consumers. It handles details such as offset tracking, partition rebalancing, and failure recovery.
- How does Kafka handle schema evolution in data streams?
- Answer: Kafka supports schema evolution by allowing producers and consumers to work with different versions of data. This is typically done with an external schema registry (such as Confluent Schema Registry) that manages schema versions and compatibility rules over time.
Kafka Handling Challenges
- How do you handle message delivery failures in a Kafka producer?
- Answer: In a Kafka producer, you can handle message delivery failures by implementing proper error handling and retry mechanisms. You may also consider logging failed messages for manual inspection.
- Explain how to handle message duplication in a Kafka consumer.
- Answer: Handling message duplication in a Kafka consumer involves tracking and storing offsets along with unique message identifiers. Deduplication logic can be implemented to ensure each message is processed only once.
- What strategies can be employed to handle an increase in Kafka consumer lag?
- Answer: Strategies for handling an increase in Kafka consumer lag include:
- Adjusting consumer group settings for rebalancing.
- Increasing the number of consumer instances.
- Scaling consumer resources (CPU, memory) to handle the load.
- How can you handle data skew across Kafka topic partitions?
- Answer: Handling data skew involves proper partitioning of data and distribution of keys across partitions. Monitoring and adjusting partitioning strategies can help prevent uneven data distribution.
- Explain how to handle backward-incompatible schema changes in a Kafka topic.
- Answer: Handling backward-incompatible schema changes involves versioning and supporting multiple schema versions. The use of a schema registry and compatibility checks can help manage schema evolution.
- What steps would you take to handle an increase in Kafka broker traffic or load?
- Answer: To handle an increase in Kafka broker traffic, you can:
- Scale the number of Kafka broker instances.
- Optimize Kafka configurations for resource usage.
- Monitor and adjust partition distribution to balance load.
- How would you handle a scenario where Kafka consumers are lagging behind producers significantly?
- Answer: To address significant consumer lag, you might:
- Monitor consumer group lag and identify bottlenecks.
- Tune consumer configurations for performance.
- Scale consumer instances or optimize processing logic.
- Explain how to handle message order preservation in a Kafka producer with parallelization.
- Answer: To preserve message order in a Kafka producer with parallelization:
- Use a single producer instance per partition.
- Ensure that messages with the same key go to the same partition.
- Implement proper synchronization to maintain order within each partition.
- How do you handle the scenario where a Kafka topic experiences high partition turnover?
- Answer: Handling high partition turnover involves:
- Adjusting Kafka’s auto-create topic settings.
- Monitoring partition reassignments and ensuring they don’t impact stability.
- Reviewing and optimizing partition assignment strategies.
- Explain how to handle Kafka broker failures gracefully in a production environment.
- Answer: Handling Kafka broker failures involves:
- Configuring replication for partitions to prevent data loss.
- Monitoring for broker failures and triggering alerts.
- Implementing a process for quick recovery, such as replacing failed brokers.
Elasticsearch
- What is Elasticsearch?
- Answer: Elasticsearch is a distributed, open-source search and analytics engine built on top of Apache Lucene. It is designed for horizontal scalability, real-time search, and the ability to handle large volumes of data.
- Explain the term “Inverted Index” in the context of Elasticsearch.
- Answer: In Elasticsearch, an inverted index is a data structure that maps terms to their corresponding document IDs. It speeds up the search process by allowing the engine to quickly locate documents containing a given term.
- How does Elasticsearch achieve horizontal scalability?
- Answer: Elasticsearch achieves horizontal scalability through sharding. It divides an index into multiple shards, and each shard can be hosted on a separate node. This allows Elasticsearch to distribute data and queries across nodes.
- What is a “Node” in Elasticsearch?
- Answer: A node in Elasticsearch is a single instance of the Elasticsearch server that participates in the cluster. Each node stores data and participates in the indexing and search capabilities of the cluster.
- Explain the purpose of an Elasticsearch “Cluster.”
- Answer: An Elasticsearch cluster is a collection of nodes that work together and share the same data. It provides horizontal scalability, fault tolerance, and high availability. Nodes within a cluster communicate and distribute data among themselves.
- How is data stored in Elasticsearch?
- Answer: Data in Elasticsearch is stored in JSON format. Each document is a JSON object, and these documents are stored in index structures. An index is a collection of documents that share the same data structure.
- What is “Indexing” in Elasticsearch?
- Answer: Indexing in Elasticsearch refers to the process of adding or updating documents in an index. During indexing, Elasticsearch stores and organizes the documents in a way that enables efficient search and retrieval.
- Explain the concept of “Mapping” in Elasticsearch.
- Answer: Mapping in Elasticsearch defines the data type and configuration for each field in a document. It helps Elasticsearch understand how to index and search the data within documents.
- What is “Query DSL” in Elasticsearch?
- Answer: Query DSL (Domain-Specific Language) is a powerful query language used in Elasticsearch for defining queries. It allows users to specify the conditions and filters for searching documents in Elasticsearch.
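As an illustration in Java, the same kind of query through the High Level REST Client (deprecated in newer Elasticsearch releases in favor of the Java API Client); host, index, and field names are placeholder assumptions:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class QueryDslExample {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            SearchRequest request = new SearchRequest("index_name"); // placeholder index
            request.source(new SearchSourceBuilder()
                    .query(QueryBuilders.matchQuery("field1", "search_term"))); // full-text match
            SearchResponse response = client.search(request, RequestOptions.DEFAULT);
            System.out.println(response.getHits().getTotalHits());
        }
    }
}
```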
- How does Elasticsearch handle distributed search?
- Answer: Elasticsearch handles distributed search by dividing an index into shards and distributing those shards across nodes in a cluster. When a search is performed, each shard executes the search locally, and the results are merged.
- Explain the purpose of “Analyzer” in Elasticsearch.
- Answer: An analyzer in Elasticsearch is responsible for processing text during indexing and searching. It includes a tokenizer for breaking text into terms and filters for modifying or removing terms. Analyzers help in creating an inverted index.
- What is “Aggregation” in Elasticsearch?
- Answer: Aggregation in Elasticsearch is a powerful feature for data analysis. It allows users to perform calculations on data and obtain summary information, such as averages, sums, and histograms, in real-time.
- How does Elasticsearch ensure data reliability and availability?
- Answer: Elasticsearch ensures data reliability and availability through data replication. Each shard has one or more replicas, distributed across different nodes. This ensures that if a node or shard fails, data is still available from replicas.
- Explain the role of “Refresh” in Elasticsearch.
- Answer: The “Refresh” operation in Elasticsearch makes recently indexed documents available for search. By default, Elasticsearch refreshes its indices every second, but it can be configured to a different interval.
- What is “Document Routing” in Elasticsearch?
- Answer: Document routing is a technique in Elasticsearch where documents with the same routing value are stored on the same shard. It can be used to optimize queries that target a specific subset of data.
- Explain the concept of “Relevance Score” in Elasticsearch.
- Answer: Relevance Score in Elasticsearch measures how well a document matches a query. It is calculated by the similarity algorithm (BM25 by default in modern versions, historically TF-IDF) together with other factors to determine the relevance of documents to a search query.
- What is the purpose of the “Filter” in Elasticsearch queries?
- Answer: The “Filter” in Elasticsearch is used to narrow down the results of a query by specifying criteria that must be met. Unlike “Query,” filters do not affect the relevance score.
- How can you optimize the performance of Elasticsearch queries?
- Answer: To optimize Elasticsearch queries, you can:
- Use document routing to target specific shards.
- Utilize appropriate analyzers and mappings.
- Use filters for non-scoring criteria.
- Monitor and adjust the number of shards based on cluster size.
- What is the role of “Shard Allocation Awareness” in Elasticsearch?
- Answer: Shard Allocation Awareness is a feature in Elasticsearch that helps prevent data loss in case of hardware failures. It ensures that primary and replica shards are distributed across different racks or zones for fault tolerance.
- How does Elasticsearch handle full-text search?
- Answer: Elasticsearch handles full-text search through the use of analyzers, which tokenize and index text data. It supports powerful features like stemming, synonym expansion, and relevance scoring for effective text search.
- Explain the difference between “Must,” “Should,” and “Must Not” clauses in Elasticsearch queries.
- Answer: In Elasticsearch queries:
- “Must” clauses are required to match for a document to be considered.
- “Should” clauses are optional, and their matching contributes to the relevance score.
- “Must Not” clauses exclude documents that match the specified criteria.
- What is the purpose of “Aliases” in Elasticsearch?
- Answer: Aliases in Elasticsearch provide a way to associate multiple indices with a single, user-friendly name. They are often used for managing index versions, facilitating index swapping, or simplifying data querying.
- How does Elasticsearch handle distributed join operations?
- Answer: Elasticsearch handles distributed join operations through the use of “parent” and “child” relationships between documents. Join queries allow for querying parent and child documents efficiently.
- What is the “Term Vectors” feature in Elasticsearch used for?
- Answer: The “Term Vectors” feature in Elasticsearch provides detailed information about the terms within a document. It can be used for understanding how terms are analyzed and for optimizing search queries.
- How does Elasticsearch handle field data?
- Answer: Elasticsearch stores field data in a structure called “doc values” for efficient sorting and aggregations. It avoids loading field data into memory, preventing memory-related issues.
- Explain the role of “Snapshot and Restore” in Elasticsearch.
- Answer: Snapshot and Restore is a feature in Elasticsearch for creating backups of entire clusters or specific indices. It allows for restoring data in case of data loss or cluster failure.
- What is the purpose of the “Score Function” in Elasticsearch queries?
- Answer: The Score Function in Elasticsearch allows users to influence the relevance score of documents based on custom criteria. It is often used to boost or penalize certain documents in search results.
- Explain the significance of the “_source” field in Elasticsearch.
- Answer: The “_source” field in Elasticsearch stores the original JSON document that was indexed. It can be retrieved and displayed along with search results.
- How does Elasticsearch handle conflicts in distributed indexing scenarios?
- Answer: Elasticsearch uses a versioning system to handle conflicts in distributed indexing. If two updates conflict, the versioning ensures that the update with the highest version is applied.
- What is the purpose of the “Transform” feature in Elasticsearch?
- Answer: The “Transform” feature in Elasticsearch is used for creating summarized or aggregated views of data. It allows users to transform and reshape data for analytics and reporting purposes.
Hands-On Elasticsearch
- How do you index a document in Elasticsearch using the RESTful API?
- Answer: To index a document, you can use the `PUT` or `POST` HTTP method along with the `_index/_type/_id` endpoint. For example:

```bash
curl -X POST "localhost:9200/index_name/doc_type/doc_id" -H 'Content-Type: application/json' -d'
{
  "field1": "value1",
  "field2": "value2"
}'
```
- Write a query to retrieve all documents from an index in Elasticsearch.
- Answer: Use the `GET` method with the index name in the endpoint. For example:

```bash
curl -X GET "localhost:9200/index_name/_search"
```
- How can you filter documents based on a specific field value in Elasticsearch?
- Answer: Use the `term` query. For example, to filter documents where the field "field1" has the value "value1":

```bash
curl -X GET "localhost:9200/index_name/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "term": { "field1": "value1" }
  }
}'
```
- Write a query to perform a full-text search in Elasticsearch.
- Answer: Use the `match` query for a full-text search. For example, to search for documents containing the term "search_term":

```bash
curl -X GET "localhost:9200/index_name/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": { "field1": "search_term" }
  }
}'
```
- How do you perform an aggregation to get the average value of a numeric field in Elasticsearch?
- Answer: Use the `avg` aggregation. For example, to calculate the average of "numeric_field":

```bash
curl -X GET "localhost:9200/index_name/_search" -H 'Content-Type: application/json' -d'
{
  "aggs": {
    "average_value": {
      "avg": { "field": "numeric_field" }
    }
  }
}'
```
- Write a query to sort the search results based on a specific field in Elasticsearch.
- Answer: Use the `sort` parameter. For example, to sort the results based on "field1" in ascending order:

```bash
curl -X GET "localhost:9200/index_name/_search" -H 'Content-Type: application/json' -d'
{
  "sort": [
    { "field1": { "order": "asc" } }
  ]
}'
```
- How can you update the value of a field in an existing document in Elasticsearch?
- Answer: Use the `POST` method with the `_update` endpoint. For example, to update "field1" to "new_value" for a specific document:

```bash
curl -X POST "localhost:9200/index_name/doc_type/doc_id/_update" -H 'Content-Type: application/json' -d'
{
  "doc": { "field1": "new_value" }
}'
```
- Write a query to perform a fuzzy search for a term in Elasticsearch.
- Answer: Use the `fuzzy` query. For example, to perform a fuzzy search for "fuzzy_term":

```bash
curl -X GET "localhost:9200/index_name/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "fuzzy": {
      "field1": { "value": "fuzzy_term" }
    }
  }
}'
```
- How do you create an index template in Elasticsearch?
- Answer: Use the `PUT` method with the `_index_template` endpoint. For example:

```bash
curl -X PUT "localhost:9200/_index_template/template_name" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["pattern*"],
  "template": {
    "settings": { "number_of_shards": 1 }
  }
}'
```
- Write a query to filter documents based on a date range in Elasticsearch.
- Answer: Use the `range` query for date filtering. For example, to filter documents with a date between "start_date" and "end_date":

```bash
curl -X GET "localhost:9200/index_name/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "range": {
      "date_field": {
        "gte": "start_date",
        "lte": "end_date"
      }
    }
  }
}'
```
Elasticsearch Handling Challenges
- How do you handle Elasticsearch cluster failures, and what steps would you take for recovery?
- Answer: In case of Elasticsearch cluster failures, the following steps can be taken:
- Identify the cause of the failure using cluster health and node statistics.
- Address hardware or network issues.
- Perform shard reallocation if needed.
- Restore data from snapshots if necessary.
- Explain the approach you would take to optimize Elasticsearch index performance.
- Answer: To optimize Elasticsearch index performance:
- Choose appropriate mappings and analyzers.
- Monitor and adjust the number of shards based on data volume.
- Optimize queries using filters and aggregations.
- Utilize index aliases for efficient data management.
- How do you handle a situation where Elasticsearch queries are slow to respond?
- Answer: To address slow Elasticsearch queries:
- Optimize the queries using appropriate filters and aggregations.
- Monitor and tune cluster resources such as CPU, memory, and disk I/O.
- Consider index optimization and shard management.
- Explain the strategy you would use to handle Elasticsearch data backups and restores.
- Answer: For handling Elasticsearch data backups and restores:
- Regularly schedule snapshots using the Snapshot and Restore feature.
- Store snapshots in a reliable backup repository.
- Test the restore process periodically to ensure data recoverability.
- How would you handle Elasticsearch index versioning for rolling updates or schema changes?
- Answer: To handle Elasticsearch index versioning:
- Use index aliases to reference the current version of the index.
- Implement a strategy for reindexing or creating new indices during schema changes.
- Ensure that applications reference the correct alias for queries.
- Explain the steps you would take to handle Elasticsearch security concerns.
- Answer: For handling Elasticsearch security concerns:
- Implement authentication and authorization using built-in or external systems.
- Enable encryption using Transport Layer Security (TLS).
- Regularly audit and monitor access logs for suspicious activities.
- How do you handle the scenario of a sudden increase in indexing traffic and the potential impact on query performance?
- Answer: To handle a sudden increase in indexing traffic:
- Monitor indexing and query performance in real-time.
- Consider scaling the cluster horizontally to handle increased indexing load.
- Optimize queries to reduce the impact on query performance.
- Explain the strategy you would use for handling Elasticsearch index rollover to manage large volumes of time-series data.
- Answer: For handling index rollover in time-series data:
- Set up an index template for creating new indices.
- Define conditions for index rollover based on size or time.
- Use aliases to reference the latest indices for querying.
- How do you handle the scenario of data inconsistencies across Elasticsearch nodes?
- Answer: To handle data inconsistencies across Elasticsearch nodes:
- Investigate network issues or hardware failures.
- Utilize shard allocation awareness to distribute shards across different racks or zones.
- If necessary, perform manual shard reallocation.
- Explain your approach to handling version conflicts during distributed updates in Elasticsearch.
- Answer: To handle version conflicts during distributed updates:
- Use the optimistic concurrency control provided by Elasticsearch.
- Handle conflicts by updating documents with higher version numbers.
- Implement retry mechanisms in case of conflicts.
Redis
- What is Redis?
- Answer: Redis is an open-source, in-memory data structure store that can be used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and more.
- Explain the key features of Redis.
- Answer: Key features of Redis include:
- In-memory storage for fast data access.
- Support for various data structures.
- Persistence options for data durability.
- High availability through replication.
- Built-in partitioning for horizontal scaling.
- Support for transactions and Lua scripting.
- How does Redis differ from traditional relational databases?
- Answer: Redis differs from traditional relational databases by being an in-memory data store with a focus on high-performance read and write operations. It lacks the structured schema and complex querying capabilities of relational databases.
- Explain the various data types supported by Redis.
- Answer: Redis supports data types such as strings, hashes, lists, sets, sorted sets (Zsets), bitmaps, hyperloglogs, and geospatial indexes. Each data type has specific operations for manipulation and retrieval.
- What is Redis PUB/SUB and how does it work?
- Answer: Redis PUB/SUB is a publish/subscribe messaging paradigm. Publishers send messages to channels, and subscribers receive messages from channels they have subscribed to. It enables real-time message broadcasting and event-driven communication.
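A sketch using the Jedis Java client; the host, port, channel name, and payload are placeholder assumptions, and in practice the subscriber would usually be a separate process started before publishing:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class PubSubDemo {
    public static void main(String[] args) {
        // Subscriber: subscribe() blocks its thread and receives published messages.
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("localhost", 6379)) {
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println(channel + " -> " + message);
                    }
                }, "news");
            }
        }).start();

        // Publisher: fire-and-forget broadcast to everyone currently subscribed to "news".
        try (Jedis publisher = new Jedis("localhost", 6379)) {
            publisher.publish("news", "hello subscribers");
        }
    }
}
```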
- How does Redis handle data persistence, and what are the available options?
- Answer: Redis provides two options for data persistence:
- Snapshotting (RDB): Periodically saves a snapshot of the dataset to disk.
- Append-only file (AOF): Logs every write operation, allowing for reconstruction of the dataset.
- Both options can be used together for enhanced durability.
- Explain the concept of Redis pipelining.
- Answer: Redis pipelining is a mechanism where multiple commands are sent to the server in a single batch, and the responses are collected in one go. This reduces the overhead of round-trip latency, improving overall performance.
- What is Redis replication, and how does it ensure high availability?
- Answer: Redis replication involves creating a copy of a Redis instance (master) to one or more replica instances. The replicas replicate the data changes from the master, providing data redundancy and allowing for failover in case the master goes down.
- How can you handle transactions in Redis?
- Answer: Redis supports transactions using the MULTI, EXEC, DISCARD, and WATCH commands. Commands enclosed between MULTI and EXEC are executed atomically, ensuring that either all or none of the commands are applied.
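The same commands through the Jedis client, as a sketch; host, port, and key names are placeholders:

```java
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TransactionDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Transaction tx = jedis.multi();   // MULTI: start queueing commands
            tx.set("key1", "value1");
            tx.set("key2", "value2");
            List<Object> results = tx.exec(); // EXEC: run all queued commands atomically
            System.out.println(results);
        }
    }
}
```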
- Explain the purpose of Redis Lua scripting.
- Answer: Redis Lua scripting allows users to execute custom scripts on the server side. Scripts can be sent to the server, stored, and executed atomically, enabling complex operations that can’t be achieved with individual Redis commands.
- What is the purpose of the Redis SET data type, and how is it different from a String?
- Answer: The SET data type in Redis is an unordered collection of unique elements. It is different from a String in that it allows for multiple values (elements) and provides set-specific operations like union, intersection, and difference.
- How does Redis handle expiration of keys, and what is the significance of TTL?
- Answer: Redis handles key expiration using Time-To-Live (TTL). When a key is set with a TTL, it will be automatically removed from the database when the specified time elapses. It is useful for implementing cache expiration.
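For example, with the Jedis client (key, value, and the 60-second TTL are arbitrary placeholders):

```java
import redis.clients.jedis.Jedis;

public class TtlDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.setex("session:42", 60, "user-data");  // SET with a 60-second TTL in one call
            System.out.println(jedis.ttl("session:42")); // remaining time-to-live in seconds
        }
    }
}
```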
- Explain the use case for Redis HyperLogLog data structure.
- Answer: HyperLogLog is used for approximating the cardinality (count of unique items) of a set. It provides an efficient way to estimate the number of unique elements in a large dataset with a low memory footprint.
- What is Redis Cluster, and how does it provide horizontal scalability?
- Answer: Redis Cluster is a distributed implementation of Redis that provides horizontal scalability. It partitions data across multiple nodes and ensures high availability by supporting master-replica relationships and automatic failover.
- How can Redis be used as a caching mechanism, and what are the advantages?
- Answer: Redis can be used as a caching mechanism by storing frequently accessed data in memory. Advantages include high performance due to in-memory storage, support for various data structures, and the ability to set expiration times for cache items.
- Explain the role of the Redis `ZSET` (Sorted Set) data type and its use cases.
- Answer: The `ZSET` (Sorted Set) data type in Redis is an ordered collection of unique elements, each associated with a score. It allows for efficient range queries, ranking, and retrieving a subset of elements based on their scores. Use cases include leaderboards, ranking systems, and real-time analytics.
- What is the purpose of Redis Lua scripting's `EVAL` and `EVALSHA` commands?
- Answer: The `EVAL` and `EVALSHA` commands in Redis Lua scripting are used to execute Lua scripts on the server. `EVAL` sends the script along with its source code, while `EVALSHA` sends the SHA1 hash of the script, which is useful for caching frequently used scripts and reducing network traffic.
- How does Redis handle memory management and eviction policies?
- Answer: Redis employs an approximated LRU (Least Recently Used) algorithm to manage memory. When the maximum memory limit is reached, Redis uses eviction policies to remove keys. Common eviction policies include `volatile-lru` (evict least recently used keys with an expiration set), `allkeys-lru` (evict any key), and more.
- Explain the use of Redis Streams and their applications.
- Answer: Redis Streams are a log-like data structure that allows publishing and consuming messages in a stream. They are suitable for building message-oriented systems, event sourcing, and log processing applications. Streams provide unique IDs for messages, support consumer groups, and enable real-time data processing.
- What is the significance of the Redis `BITSET` (bitmap) functionality, and how can it be used?
- Answer: Redis bitmaps are a compact representation of bit arrays, implemented as bit-level operations (`SETBIT`, `GETBIT`, `BITOP`) on string values. They can be used for efficient storage and manipulation of bit-level information. Use cases include implementing bloom filters, counting unique elements, and representing state information in a memory-efficient way.
- Explain the purpose of the Redis `GEO` data type and its applications.
- Answer: The `GEO` data type in Redis is used for geospatial indexing. It stores geographical coordinates (latitude and longitude) along with associated members. Applications include location-based services, proximity searches, and mapping.
- How can you ensure data consistency and atomic operations in Redis?
- Answer: Redis provides atomic operations on single key-value pairs. For ensuring data consistency across multiple operations, transactions can be used. The `MULTI`, `EXEC`, and `DISCARD` commands enable the execution of a series of commands as a single atomic operation.
- What is the purpose of Redis Sentinel, and how does it contribute to high availability?
- Answer: Redis Sentinel is a high-availability solution for Redis. It monitors Redis instances, detects failures, and performs automatic failover by promoting a replica to master. Sentinel also provides information about the health and status of the Redis instances.
- Explain the role of Redis Keyspace Notifications and their applications.
- Answer: Redis Keyspace Notifications allow clients to subscribe to notifications about key-space events (e.g., key creation, modification, deletion). They are useful for building real-time applications, cache invalidation, and tracking changes in the dataset.
- How can you secure a Redis instance?
- Answer: To secure a Redis instance:
- Set a strong password using the `requirepass` configuration.
- Bind Redis to specific network interfaces.
- Disable unnecessary commands in the `redis.conf` file.
- Implement network security measures, such as firewalls.
- Explain the use of the Redis `MIGRATE` command and scenarios where it is applicable.
- Answer: The `MIGRATE` command in Redis is used to move a key from one Redis instance to another. It is applicable in scenarios where data needs to be transferred between Redis instances, such as redistributing data in a cluster or migrating data to a new server.
- What is the purpose of Redis RDB snapshots, and how can they be triggered?
- Answer: Redis RDB snapshots are used to persist the entire dataset to disk for backup and recovery purposes. Snapshots can be triggered manually using the `SAVE` command (or `BGSAVE` for a non-blocking background save) or automatically based on a configurable schedule using the `save` directive in `redis.conf`.
- How does Redis handle cache eviction in the absence of a configured maximum memory limit?
- Answer: If no maximum memory limit is configured, Redis can grow indefinitely. However, in the absence of a configured limit, it’s important to monitor memory usage, and manual eviction or cleanup processes may be necessary to avoid running out of system resources.
- Explain the use case for Redis HyperLogLog data structure and its advantages.
- Answer: Redis HyperLogLog is used for approximating the cardinality (count of unique items) of a set. It provides a low memory footprint, making it efficient for counting unique elements in large datasets. Advantages include constant memory usage and accuracy in estimating cardinality.
- How can Redis be utilized as a message broker, and what features does it offer for this purpose?
- Answer: Redis can be used as a message broker by leveraging the PUB/SUB functionality. Publishers send messages to channels, and subscribers receive messages from the channels they are interested in; for durable, replayable messaging, Redis Streams add persistence, consumer groups, and acknowledgments.
Hands-On Redis
- Write a Redis command to set the value of a key.
- Answer: `SET key_name value`
- How would you retrieve the value of a key in Redis?
- Answer: `GET key_name`
- Explain how to increment the value of a numeric key in Redis.
- Answer: To increment the value of a numeric key: `INCR key_name`
- Write a Redis command to add multiple elements to a list.
- Answer: `RPUSH key_name element1 element2 element3`
- How can you retrieve all elements from a list in Redis?
- Answer: `LRANGE key_name 0 -1`
- Write a command to store a hash with multiple key-value pairs in Redis.
- Answer: `HMSET hash_key field1 value1 field2 value2 field3 value3` (since Redis 4.0, `HMSET` is deprecated in favor of `HSET`, which accepts multiple field-value pairs)
- Explain how to retrieve all fields and values from a hash in Redis.
- Answer: `HGETALL hash_key`
- Write a Redis command to add elements to a sorted set with scores.
- Answer: `ZADD sorted_set_key score1 member1 score2 member2`
- How can you retrieve elements from a sorted set within a specific score range?
- Answer: `ZRANGEBYSCORE sorted_set_key min_score max_score`
- Write a command to publish a message to a Redis channel.
- Answer: `PUBLISH channel_name message`
- How do you subscribe to a Redis channel to receive messages?
- Answer: `SUBSCRIBE channel_name`
- Write a command to create a Redis transaction and execute multiple commands atomically.
- Answer:

```
MULTI
SET key1 value1
SET key2 value2
EXEC
```
- How can you set a Time-To-Live (TTL) for a key in Redis?
- Answer: `EXPIRE key_name seconds`
- Write a Redis command to check if a key exists.
- Answer: `EXISTS key_name`
- Explain the use of the `BITOP` command in Redis.
- Answer: The `BITOP` command in Redis performs bitwise operations on strings. For example, `BITOP AND dest_key key1 key2` performs a bitwise AND operation on the binary data of key1 and key2, storing the result in dest_key.
- How can you retrieve the cardinality of a HyperLogLog in Redis?
- Answer: `PFCOUNT hyperloglog_key`
- Write a command to move a key from one database to another in Redis.
- Answer: `MOVE key_name destination_database`
- Explain the purpose of the `FLUSHDB` and `FLUSHALL` commands in Redis.
- Answer: `FLUSHDB` removes all keys from the current database, while `FLUSHALL` removes keys from all databases in the Redis instance.
- Write a command to increment the score of a member in a sorted set in Redis.
- Answer: `ZINCRBY sorted_set_key increment member`
- How can you configure a Redis instance to require authentication?
- Answer: Set the `requirepass` configuration directive in the `redis.conf` file or use the `CONFIG SET` command to set the password dynamically.
Kubernetes (K8s)
- What is Kubernetes?
- Answer: Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.
- Explain the key components of Kubernetes architecture.
- Answer: Key components include:
- Master Node: Controls and manages the cluster.
- Node/Minion: Worker machines where containers are deployed.
- Pod: Smallest deployable unit, representing a single instance of a running process.
- Controller: Manages the desired state of pods and other objects.
- Service: Exposes pods and provides a stable endpoint.
- What is a Pod in Kubernetes?
- Answer: A Pod is the smallest deployable unit in Kubernetes, representing a single instance of a running process. It can contain one or more containers that share network and storage, and it represents the basic building block of deployment.
- Explain the difference between a Deployment and a StatefulSet in Kubernetes.
- Answer: Deployments are suitable for stateless applications with no unique identities, while StatefulSets are designed for stateful applications with unique identities and stable network identities.
- How does Kubernetes achieve high availability for applications?
- Answer: Kubernetes achieves high availability through multiple replicas of applications, automatic scaling, and self-healing mechanisms. Components like Replication Controllers, Deployments, and StatefulSets ensure that the desired number of replicas are running, and failures trigger automatic recovery.
- What is the role of a Service in Kubernetes, and how does it work?
- Answer: A Service in Kubernetes provides a stable endpoint for accessing a set of pods. It enables load balancing, service discovery, and allows communication between different parts of an application.
- Explain the concept of Ingress in Kubernetes.
- Answer: Ingress is a Kubernetes resource that manages external access to services within a cluster. It acts as an API gateway, providing HTTP and HTTPS routing to different services based on rules.
- What is the purpose of ConfigMaps and Secrets in Kubernetes?
- Answer: ConfigMaps are used to inject configuration data into containers, while Secrets are used for sensitive data like passwords or API keys. Both ConfigMaps and Secrets can be mounted into pods as volumes.
- How does Rolling Deployment work in Kubernetes?
- Answer: In a Rolling Deployment, new pods are gradually deployed while old ones are terminated, ensuring continuous availability. This strategy helps in updating applications without downtime.
- What is a DaemonSet in Kubernetes?
- Answer: A DaemonSet ensures that all (or some) nodes run a copy of a pod. It is often used for cluster-level tasks like monitoring, logging, or networking.
- Explain Horizontal Pod Autoscaling in Kubernetes.
- Answer: Horizontal Pod Autoscaling automatically adjusts the number of pods in a deployment based on observed CPU or memory utilization, ensuring optimal resource usage.
- How do you monitor and troubleshoot Kubernetes clusters?
- Answer: Monitoring can be done using tools like Prometheus, Grafana, and Kubernetes Dashboard. Troubleshooting involves checking logs, examining pod status, and using tools like `kubectl exec` for debugging.
- What is the purpose of a Kubernetes ConfigMap?
- Answer: ConfigMaps are used to decouple configuration artifacts from the container image, allowing configuration changes without modifying the image. They store key-value pairs and can be mounted into pods as volumes.
- How does Kubernetes handle storage orchestration?
- Answer: Kubernetes provides persistent volumes (PVs) and persistent volume claims (PVCs) for storage orchestration. PVs abstract the physical storage, and PVCs request specific storage resources. Storage Classes define the characteristics of the underlying storage.
- Explain the concept of a Pod Lifecycle in Kubernetes.
- Answer: The Pod Lifecycle includes phases such as Pending, Running, Succeeded, Failed, and Unknown. Pods move through these phases as they are scheduled, started, and eventually terminated.
- What is a Kubernetes Operator, and how does it enhance application management?
- Answer: A Kubernetes Operator is a method of packaging, deploying, and managing applications in Kubernetes using custom resources. It extends the capabilities of Kubernetes to automate complex, stateful applications.
- Explain the role of Helm in Kubernetes deployments.
- Answer: Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. It uses charts (packages of pre-configured Kubernetes resources) to define, install, and upgrade even the most complex Kubernetes applications.
- How do you secure communication between different components in a Kubernetes cluster?
- Answer: Communication is secured using TLS (Transport Layer Security). Kubernetes allows you to enable TLS for communication between components by configuring the necessary certificates and keys.
- What is the purpose of the `kubectl` command, and how is it used in Kubernetes?
- Answer: `kubectl` is the command-line tool for interacting with Kubernetes clusters. It allows users to deploy and manage applications, inspect and debug clusters, and perform various administrative tasks.
- Explain the concept of Kubernetes Helm Charts and their structure.
- Answer: Helm Charts are packages of pre-configured Kubernetes resources. They contain a directory structure with files like `values.yaml` (configuration parameters), `templates/` (Kubernetes YAML templates), and optional chart dependencies.
- How does Kubernetes handle network communication between pods in different nodes of a cluster?
- Answer: Kubernetes uses a network overlay to enable communication between pods in different nodes. The Container Network Interface (CNI) standard is often used to implement networking solutions like Calico, Flannel, or Weave.
- What is the purpose of a Kubernetes ConfigMap, and how can it be used in applications?
- Answer: ConfigMaps store configuration data that can be consumed by applications running in a Kubernetes cluster. They can be used to separate configuration from application code, allowing for easier configuration changes.
- Explain the differences between a Kubernetes Deployment and a StatefulSet.
- Answer: Deployments are suitable for stateless applications, providing features like rolling updates. StatefulSets, on the other hand, are designed for stateful applications with unique identities, providing stable network identities and persistent storage.
- How can you scale a Kubernetes Deployment horizontally?
- Answer: You can scale a Deployment horizontally by adjusting the replica count. For example: `kubectl scale deployment my-deployment --replicas=3`
- What are the differences between Kubernetes Pods and Containers?
- Answer: A Pod is the smallest deployable unit that can contain one or more containers sharing the same network namespace. Containers, on the other hand, are lightweight, standalone, and executable software packages that include everything needed to run a piece of software.
- Explain how you can expose a Kubernetes Service to the external world.
- Answer: Services can be exposed externally using the `NodePort`, `LoadBalancer`, or `Ingress` resources. For example, using `NodePort`: `kubectl expose service my-service --type=NodePort --port=80 --target-port=8080`
- How does Kubernetes manage application updates without downtime?
- Answer: Kubernetes manages updates without downtime using strategies like Rolling Updates. It gradually replaces old pods with new ones, ensuring a smooth transition and continuous availability.
- What is the purpose of Kubernetes Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)?
- Answer: PVs provide an abstraction for physical storage in a cluster, while PVCs request storage resources from PVs. PVCs enable users to consume storage resources without worrying about the underlying details.
- Explain how Kubernetes manages secrets and sensitive information.
- Answer: Kubernetes Secrets store and manage sensitive information such as API keys or passwords. They can be mounted into pods as files or used as environment variables. Secret values are Base64-encoded, which is an encoding rather than encryption, so access control (RBAC) and encryption at rest are needed for real protection.
- How do you troubleshoot a Kubernetes cluster, and what tools can be used for debugging?
- Answer: Troubleshooting involves checking pod logs, inspecting events, and using tools like `kubectl describe`, `kubectl logs`, and `kubectl exec` for debugging. Monitoring tools like Prometheus and Grafana can also provide insights into cluster health.
Hands-On Kubernetes
- Write a `kubectl` command to deploy a YAML file containing a Pod.
- Answer: `kubectl apply -f pod.yaml`
- How can you scale a Deployment named `my-deployment` to have 5 replicas?
- Answer: `kubectl scale deployment my-deployment --replicas=5`
- Write a command to create a ConfigMap named `my-config` from a file named `config.txt`.
- Answer: `kubectl create configmap my-config --from-file=config.txt`
- How do you expose a service named `my-service` as a NodePort on port 30000?
- Answer: `kubectl expose service my-service --type=NodePort --port=80 --target-port=8080` (note that `kubectl expose` cannot pin a specific nodePort such as 30000; set `spec.ports[].nodePort` in the Service manifest or edit the Service afterwards)
- Write a `kubectl` command to check the logs of a pod named `my-pod`.
- Answer: `kubectl logs my-pod`
- How can you update an image for a Deployment named `my-deployment` to use `new-image:latest`?
- Answer: `kubectl set image deployment/my-deployment my-container=new-image:latest`
- Write a `kubectl` command to create a Secret named `my-secret` with a key-value pair.
- Answer: `kubectl create secret generic my-secret --from-literal=key1=value1`
- How do you check the status of all Pods in a namespace using `kubectl`?
- Answer: `kubectl get pods -n <namespace>`
- Write a command to execute a shell command inside a running Pod named `my-pod`.
- Answer: `kubectl exec -it my-pod -- /bin/sh`
- How can you delete all resources in a namespace named `my-namespace`?
- Answer: `kubectl delete all --all -n my-namespace`
- Write a `kubectl` command to create a Persistent Volume (PV) from a YAML file.
- Answer: `kubectl apply -f pv.yaml`
- How do you check the events related to a specific resource in Kubernetes?
- Answer: `kubectl describe <resource_type> <resource_name>`
- Write a `kubectl` command to create a Namespace named `my-namespace`.
- Answer: `kubectl create namespace my-namespace`
- How can you run a simple NGINX web server in a Pod named `nginx-pod` using the `nginx` image?
- Answer: `kubectl run nginx-pod --image=nginx --restart=Never`
- Write a command to apply a rolling update to a Deployment named `my-deployment`.
- Answer: `kubectl set image deployment/my-deployment my-container=new-image:latest`
- How do you check the current context and clusters in your `kubectl` configuration?
- Answer: `kubectl config get-contexts` and `kubectl config get-clusters`
- Write a command to create a Horizontal Pod Autoscaler named `my-hpa` for a Deployment named `my-deployment`.
- Answer: `kubectl autoscale deployment my-deployment --cpu-percent=80 --min=1 --max=10 --name=my-hpa`
- How can you manually delete a Pod named `my-pod` and ensure it gets replaced?
- Answer: `kubectl delete pod my-pod` (the Pod is replaced automatically only if it is managed by a controller such as a Deployment or ReplicaSet; a bare Pod is not recreated)
- Write a command to label a node named `my-node` with the label `environment=production`.
- Answer: `kubectl label node my-node environment=production`
- How do you check the storage classes available in a Kubernetes cluster?
- Answer: `kubectl get storageclasses`
Backend Developer Interview Questions and Answers
1. What is object-oriented programming?
Answer: Object-oriented programming (OOP) is a programming paradigm based on the concept of “objects,” which are instances of classes. It combines data and the methods that operate on that data into a single unit called an object. OOP focuses on four main principles:
- Inheritance: Mechanism where a new class inherits properties and behavior from an existing class.
- Polymorphism: Ability to present the same interface for differing underlying forms (data types).
- Encapsulation: Bundling the data and the methods that operate on the data into a single unit, and restricting access to some of the object’s components.
- Abstraction: Hiding the complex implementation details and showing only the essential features of the object.
2. What is a constructor?
Answer: A constructor in Java is a special method used to initialize objects. It is called when an instance of a class is created. A constructor has the same name as the class and does not have a return type. It sets initial values for the object’s attributes.
3. What are the different types of constructors?
Answer: There are three main types of constructors in Java:
- No-Argument Constructor: A constructor that does not take any arguments.
- Default Constructor: A constructor that is automatically created by the compiler if no constructor is defined. It is a no-argument constructor.
- Parameterized Constructor: A constructor that takes arguments to initialize an object with specific values.
4. Why is multiple inheritance not possible in Java?
Answer: Multiple inheritance is not supported in Java to avoid the complexity and ambiguity caused by the “diamond problem.” This occurs when a class inherits from two classes that both inherit from a common superclass, leading to ambiguity in method and property inheritance. Java resolves this by allowing multiple inheritance of interfaces, but not classes.
5. How to create an immutable class?
Answer: To create an immutable class in Java (a minimal sketch follows this list):
- Declare the class as `final` to prevent subclassing.
- Declare all fields as `private` to prevent direct access.
- Declare all fields as `final` to ensure they are initialized only once.
- Provide a constructor to initialize the fields.
- Do not provide setter methods.
- Perform deep copies of mutable objects in the constructor and in getter methods to ensure the internal state cannot be altered.
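A minimal sketch of these rules; the class and its fields are illustrative:

```java
public final class Money {                      // final: cannot be subclassed
    private final String currency;              // private final: set once, never writable
    private final long amountCents;

    public Money(String currency, long amountCents) {
        this.currency = currency;
        this.amountCents = amountCents;
    }

    public String getCurrency() { return currency; }
    public long getAmountCents() { return amountCents; }
    // No setters; both fields are immutable types, so no defensive copies are needed here.
}
```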
6. What is the difference between Wrapper Classes and Primitive Data Types?
Answer:
Wrapper classes in Java provide a way to use primitive data types (like `int`, `boolean`) as objects. Wrapper classes include `Integer`, `Boolean`, etc., which encapsulate the primitive types in an object. Primitive types are basic data types provided by the language (e.g., `int`, `boolean`) and are not objects.
7. What is serialization?
Answer: Serialization is the process of converting an object’s state into a byte stream so that it can be saved to a file, sent over a network, or stored in a database. Deserialization is the reverse process, converting a byte stream back into a copy of the original object. It is commonly used in Java for tasks involving data transfer and persistence.
8. What is the difference between `finally`, `final`, and `finalize`?
Answer:
- `finally`: A block of code that follows a `try-catch` block and executes regardless of whether an exception is thrown or not.
- `final`: A keyword used to declare constants, prevent method overriding, and prevent inheritance of a class.
- `finalize`: A deprecated method that was used to perform clean-up before an object is garbage collected. It is recommended to use the `AutoCloseable` interface (with try-with-resources) for resource management in newer Java versions.
9. What is the difference between `LinkedList` and `ArrayList`?
Answer:
- `LinkedList`: Uses a doubly linked list data structure. Each element is a node that contains references to the previous and next nodes. Better for frequent insertions and deletions.
- `ArrayList`: Uses a dynamic array to store elements. Provides faster random access due to index-based access. Better for frequent retrieval operations.
- Performance:
- Insertion at the end: O(1) for `LinkedList`, amortized O(1) for `ArrayList`.
- Insertion at a given index: O(n) for both.
- Search by value: O(n) for both.
- Get by index: O(1) for `ArrayList`, O(n) for `LinkedList`.
- Removal: O(n) for both.
10. How does a `HashMap` work?
Answer: `HashMap` in Java uses an array of buckets; each bucket is a linked list (or a red-black tree in Java 8+ once the bucket exceeds a threshold). Hashing is used to convert a key into an array index (bucket). Each entry in the map contains a key, a value, and a link to the next entry in the bucket (if any). Collisions (when two keys hash to the same index) are handled by chaining entries in a linked list or tree within the same bucket.
11. What is a thread?
Answer: A thread is an independent path of execution within a program. Threads allow concurrent execution of two or more parts of a program to maximize utilization of CPU. Threads can perform tasks simultaneously, making applications more responsive.
12. What are the different types of threads?
Answer:
- User Threads: High-priority threads created by the application to perform tasks.
- Daemon Threads: Low-priority threads that run in the background to perform housekeeping tasks for the JVM, such as garbage collection.
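For example (task bodies are placeholders):

```java
public class ThreadTypes {
    public static void main(String[] args) {
        Thread userThread = new Thread(() -> System.out.println("doing application work"));
        userThread.start(); // the JVM waits for user threads to finish before exiting

        Thread daemonThread = new Thread(() -> System.out.println("background housekeeping"));
        daemonThread.setDaemon(true); // must be set before start(); JVM exits without waiting for daemons
        daemonThread.start();
    }
}
```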
13. What is the difference between a process and a thread?
Answer:
- Process: An instance of a running program. It has its own memory space and resources.
- Thread: A unit of execution within a process. Threads within the same process share the same memory space and resources but run independently.
14. What is exception handling?
Answer: Exception handling in Java is a mechanism to handle runtime errors, allowing the normal flow of the program to be maintained. It is done using `try`, `catch`, `finally`, and `throw` blocks to handle exceptions like `ClassNotFoundException`, `IOException`, etc.
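A small illustration; the out-of-bounds access is just a convenient way to trigger an exception:

```java
public class ExceptionDemo {
    public static void main(String[] args) {
        try {
            int[] numbers = new int[2];
            System.out.println(numbers[5]);           // throws ArrayIndexOutOfBoundsException
        } catch (ArrayIndexOutOfBoundsException e) {  // handle the error; the program keeps running
            System.err.println("Bad index: " + e.getMessage());
        } finally {
            System.out.println("cleanup runs whether or not an exception was thrown");
        }
    }
}
```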
15. What are the different types of exceptions in Java?
Answer:
- Checked Exceptions: Exceptions that are checked at compile-time. The programmer must handle these exceptions, e.g., `IOException`, `SQLException`.
- Unchecked Exceptions: Exceptions that occur at runtime and are not checked at compile-time, e.g., `NullPointerException`, `ArrayIndexOutOfBoundsException`.
16. What is a design pattern?
Answer: A design pattern is a reusable solution to a common problem in software design. It provides a template for solving problems that can be used in different situations. Design patterns are categorized into Creational, Structural, and Behavioral patterns.
17. What are the types of Creational design patterns?
Answer: The five well-known Creational design patterns are listed below (a minimal Singleton sketch follows the list):
- Builder Pattern
- Abstract Factory Pattern
- Factory Method Pattern
- Singleton Pattern
- Prototype Pattern
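As one example from the list above, a minimal eagerly initialized Singleton sketch (the class name is illustrative; it is thread-safe because the instance is created at class-loading time):

```java
public final class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig(); // created once at class load

    private AppConfig() {}                                     // private: no outside instantiation

    public static AppConfig getInstance() {
        return INSTANCE;
    }
}
```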
18. What is a lambda expression?
Answer: A lambda expression in Java is a concise way to represent an anonymous function (a block of code with parameters). It provides a clear and concise syntax to write inline implementations of functional interfaces. Example: `(parameters) -> expression`.
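For instance, lambdas implementing the functional interfaces `Runnable` and `Comparator` (the data is illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        Runnable greeter = () -> System.out.println("hello"); // zero-parameter lambda
        greeter.run();

        List<String> names = Arrays.asList("Charlie", "Alice", "Bob");
        // Lambda as an inline Comparator implementation: sorts by string length.
        names.sort(Comparator.comparing(name -> name.length()));
        System.out.println(names);
    }
}
```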