Kafka Quiz

What is Apache Kafka?
A centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
A high-throughput distributed messaging system.
A change management system with package management and workflow integration for seamless project oversight.
What are the three main components of Kafka?
Publishers, engine, subscribers.
Producers, brokers, and consumers.
Event raisers, event management system, event handlers.
Messengers, message queue, message handlers.
What is a group of Kafka brokers called?
Kafka cluster
Kafka group
Peanut brittle
Brokerage
What coordinates interactions between brokers in a Kafka cluster?
Kafka Broker Management Server
Leader node
ZooKeeper
They fight among themselves until a victor is crowned Highlander style.
What is a logical grouping of records (messages) called in Kafka?
Record Group
Event Name
Queue Name
Topic
How are topics distributed across Kafka brokers?
By the creation of partitions, each of which is served by one broker and is a log of the records sent to that partition.
By the creation of partitions, distributed across brokers.
By creating a new partition on each broker, with an accompanying log file of records.
By creating a number of partitions equal to half the number of brokers and making redundant partitions on the nodes not acting as primary.
How does a Kafka partition work?
In concert, partitions act as a distributed commit log, each of which has its own offset for committed records. Records are not replicated across partitions, but partitions may be replicated for redundancy (although brokers holding replica partitions are not the primary).
Partitions each act as a FIFO queue, and do not retain information after it has been sent to a consumer.
Partitions all replicate records and eagerly send them to consumers, notifying the leader node when a record has been consumed.
In concert, partitions act as a distributed commit log, which is coordinated by the topic leader, a broker which keeps track of the order of records.
How can Kafka be used as a FIFO queue?
Trick question, it is not achievable.
By configuring a topic to use distributed partitions, where a leader broker maintains global order of records.
By using the QueueConsumer implementation of the consumer client, which instructs brokers to coordinate to find the next record which should be sent to the consumer.
By creating only one partition for a topic, thereby retaining global ordering at the cost of using a single broker for that topic.
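In practice the single-partition option is just topic configuration. A minimal sketch with the Java AdminClient (the broker address and the topic name "orders" are assumptions, and a running broker is required):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class CreateFifoTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // One partition (replication factor 1): all records for the topic
            // land in a single log, preserving global order on one broker.
            NewTopic fifoTopic = new NewTopic("orders", 1, (short) 1);
            admin.createTopics(Collections.singletonList(fifoTopic)).all().get();
        }
    }
}
```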
A ProducerRecord requires which properties?
A topic and a value.
Topic, key, and value.
Topic, key, value and timestamp.
Topic, key, value, timestamp and partition.
How is the partition a record is sent to determined?
By the partition property set when the ProducerRecord is created.
By the key set when the ProducerRecord is created.
In a round-robin fashion.
All of the above.
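The constructor overloads of the Java client's ProducerRecord mirror both questions above: only topic and value are required, and partition selection follows whichever optional fields are set (a sketch; the topic and key names are illustrative):

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordExamples {
    public static void main(String[] args) {
        // Topic and value only: with no key, the partitioner spreads records
        // across partitions (round-robin in older clients, sticky in newer ones).
        ProducerRecord<String, String> noKey =
                new ProducerRecord<>("orders", "value-only");

        // Topic, key, and value: the key is hashed to pick a partition, so
        // records with the same key always land in the same partition.
        ProducerRecord<String, String> keyed =
                new ProducerRecord<>("orders", "customer-42", "keyed-value");

        // Topic, partition, key, and value: an explicit partition overrides
        // both the key hash and round-robin assignment.
        ProducerRecord<String, String> pinned =
                new ProducerRecord<>("orders", 0, "customer-42", "pinned-value");

        System.out.println(noKey.partition());  // null: left to the partitioner
        System.out.println(pinned.partition()); // 0: set explicitly
    }
}
```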
How are records sent from producers to the Kafka cluster?
Records are sent in mini-batches which are per partition but have a max linger setting so they will not "stall out" at the producer.
Using a mini-batching scheme built on a RecordAccumulator, configured with a max buffer size, and a per-topic set of RecordBatch objects, each configured with a max batch size.
Records are sent eagerly, as soon as they are available.
Records are batched regardless of topic, and are sent at a set, configurable batch interval.
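The per-partition mini-batching behavior is governed by producer configuration. A sketch against the Java client (the broker address is an assumption; batch.size and buffer.memory shown are the documented defaults, linger.ms is an illustrative non-default):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import java.util.Properties;

public class BatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", 16384);        // max bytes per per-partition batch
        props.put("linger.ms", 5);             // max wait so batches don't stall out
        props.put("buffer.memory", 33554432L); // RecordAccumulator's total buffer
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) calls now accumulate into per-partition batches,
            // flushed when a batch fills or linger.ms elapses.
        }
    }
}
```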
Which of the following is a true statement?
Consumers use either subscribe(), which is per-topic, or assign(), which is per-partition, to signal to the Kafka cluster which records they want to consume. Getting records is accomplished using poll(), an asynchronous function which requires a callback to process the returned records.
Consumers use either subscribe(), which is per-topic, or assign(), which is per-partition, to signal to the Kafka cluster which records they want to consume. Getting records is accomplished using poll(), a single-threaded, blocking function.
Consumers always use subscribe() to signal to the Kafka cluster which records they want to consume, which is on a per-topic basis. Getting records is accomplished using poll(), a single-threaded, blocking function.
Consumers always use assign() to signal to the Kafka cluster which records they want to consume, which is on a per-partition basis. Getting records is accomplished using poll(), an asynchronous function which requires a callback to process the returned records.
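The subscribe()-then-poll() pattern looks like this in the Java client (a sketch; the broker address, group id, and topic name are assumptions, and a running broker is required):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "quiz-readers");            // illustrative group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // per-topic
            while (true) {
                // poll() blocks the calling thread up to the given timeout;
                // there is no callback -- records are returned directly.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("%s@%d: %s%n", r.topic(), r.offset(), r.value());
                }
            }
        }
    }
}
```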
Regarding partition offsets, which of the following is true?
Last committed and current position offsets are established. By default, Kafka consumers must signal when a record is committed.
Last committed and current position offsets are established. By default, Kafka commits a record 5 seconds after it has been sent to a consumer, and the amount of time Kafka waits before it commits is configurable, but consumers cannot enact a commit.
Last committed and current position offsets are established. By default, Kafka auto-commits a record 5 seconds after it has been sent to a consumer, but auto-commit may be disabled, in which case consumers must signal when a record is committed.
Only the current position offset is established.
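The commit behavior above is a consumer-side configuration choice (a sketch; the broker address and group id are assumptions, and the interval shown is the documented 5-second default):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Properties;

public class CommitConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "quiz-readers");            // illustrative group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Default behavior: offsets are auto-committed every 5 seconds.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // With enable.auto.commit=false, the consumer would instead call
            // consumer.commitSync() (or commitAsync()) after processing records.
        }
    }
}
```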
How do Kafka consumer groups work?
All consumers in a group have the same group.id set, and are managed by the GroupCoordinator, a broker node nominated for the purpose. Consumer group members each handle whole partitions, and it is possible to have too many partitions, leaving some members overworked, or too few, leaving some consumers with no work to do.
All consumers in a group have the same group.id set, and are managed by the GroupCoordinator, a broker node nominated for the purpose. Consumer group members handle partitions in a round robin fashion, as directed by the GroupCoordinator, so that each member will have at least some work.
All consumers in a group have the same group.id set, and when they call poll(), the Kafka cluster sends them records which may be from multiple partitions in a coordinated fashion.
Consumer groups require no configuration except that they subscribe() or assign() to the same topics/partitions; the Kafka cluster handles the rest. When a consumer in the group calls poll(), the Kafka cluster sends them records which may be from multiple partitions in a coordinated fashion.
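Group membership is driven entirely by group.id: consumers sharing the same value divide the subscribed topic's partitions among themselves (a sketch; the broker address, group id, and topic name are assumptions):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Collections;
import java.util.Properties;

public class GroupMembers {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "order-processors");        // same id => same group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Both consumers share group.id, so the GroupCoordinator assigns each
        // partition of the topic to exactly one of them. With 4 partitions each
        // member typically serves 2; with 1 partition, one member sits idle.
        KafkaConsumer<String, String> memberA = new KafkaConsumer<>(props);
        KafkaConsumer<String, String> memberB = new KafkaConsumer<>(props);
        memberA.subscribe(Collections.singletonList("orders"));
        memberB.subscribe(Collections.singletonList("orders"));
    }
}
```

(In real code each member would poll from its own thread, since KafkaConsumer is not thread-safe.)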
When might a group coordinator rebalance a consumer group?
Only when a consumer leaves the group, based on not sending a heartbeat.
Only when a new partition is added to a topic.
When a consumer leaves the group, based on not sending a heartbeat; or when a new partition is added to a topic.
When a consumer outside the consumer group subscribes to a topic.
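The "missing heartbeat" trigger is tunable on the consumer side. A config-fragment sketch (values shown are recent Java-client defaults; treat them as assumptions for older client versions):

```java
import java.util.Properties;

public class RebalanceTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        // A consumer is judged to have left the group when no heartbeat arrives
        // within session.timeout.ms, at which point the GroupCoordinator
        // triggers a rebalance of the group's partitions.
        props.put("heartbeat.interval.ms", "3000"); // how often heartbeats are sent
        props.put("session.timeout.ms", "45000");   // silence budget before eviction
        System.out.println(props);
    }
}
```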