Spring Boot + Kafka message middleware integration: a code example
1. Create a Spring Boot project and add the following dependencies
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.41</version>
</dependency>
2. Configuration file
server.port=4400
#============== kafka ===================
# Kafka broker address(es); multiple brokers can be listed, comma-separated
spring.kafka.bootstrap-servers=192.168.102.88:9092
# Default consumer group id
spring.kafka.consumer.group-id=jkafka.demo
# earliest: if a partition has a committed offset, consume from it; otherwise consume from the beginning
# latest:   if a partition has a committed offset, consume from it; otherwise consume only newly produced data
# none:     if every partition has a committed offset, consume from it; if any partition lacks one, throw an exception
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-commit-interval=100
# Key and value deserializers for the consumer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
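The properties above only configure the consumer side. Spring Boot falls back to StringSerializer for the producer by default, so the KafkaTemplate<String, String> below works as-is; if you prefer to make that explicit, the matching producer properties (an optional addition, not part of the original article) look like this:

# Optional: explicit producer serializers (Spring Boot already defaults to StringSerializer)
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer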
3. Message entity
import java.io.Serializable;
import java.util.Date;
import lombok.Data;

@Data
public class Message implements Serializable {

    private static final long serialVersionUID = 2522280475099635810L;

    // Message ID
    private String id;
    // Message content
    private String msg;
    // Time the message was sent
    private Date sendTime;
}
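For reference, a minimal sketch (not from the original article) of what the fastjson payload for this entity looks like; by default fastjson writes the Date field as epoch milliseconds:

import com.alibaba.fastjson.JSON;
import java.util.Date;

public class MessageJsonDemo {
    public static void main(String[] args) {
        Message m = new Message();
        m.setId("demo-id");        // setters are generated by Lombok's @Data
        m.setMsg("hello");
        m.setSendTime(new Date());
        // Prints something like: {"id":"demo-id","msg":"hello","sendTime":1700000000000}
        System.out.println(JSON.toJSONString(m));
    }
}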
4. Message producer class
import com.alibaba.fastjson.JSON;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KfkaProducer {

    private static Logger logger = LoggerFactory.getLogger(KfkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String topic, Message message) {
        try {
            logger.info("Sending message...");
            kafkaTemplate.send(topic, JSON.toJSONString(message));
            logger.info("Message sent ----->>>>> message = {}", JSON.toJSONString(message));
        } catch (Exception e) {
            // Log the failure instead of silently discarding it
            logger.error("Failed to send message", e);
        }
    }
}
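If you want confirmation of delivery, kafkaTemplate.send() returns a future you can attach callbacks to. A minimal sketch of the send call inside the method above, assuming Spring Kafka 2.x where the return type is ListenableFuture (3.x returns a CompletableFuture instead):

// Additional imports: org.springframework.kafka.support.SendResult,
//                     org.springframework.util.concurrent.ListenableFuture
ListenableFuture<SendResult<String, String>> future =
        kafkaTemplate.send(topic, JSON.toJSONString(message));
future.addCallback(
        result -> logger.info("Delivered to partition {} at offset {}",
                result.getRecordMetadata().partition(),
                result.getRecordMetadata().offset()),
        ex -> logger.error("Delivery failed", ex));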
5. Message listener (consumer) class
import java.util.Optional;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KfkaListener {

    private static Logger logger = LoggerFactory.getLogger(KfkaListener.class);

    @KafkaListener(topics = {"hello"})
    public void listen(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            logger.info("Received record ------------ record = {}", record);
            logger.info("Received message ----------- message = {}", message);
        }
    }
}
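Because the configuration sets enable-auto-commit=false, offsets are committed by Spring's listener container. If you would rather commit them yourself (an optional variation, not part of the original article), set spring.kafka.listener.ack-mode=manual_immediate in application.properties and add an Acknowledgment parameter, roughly like this:

// Additional import: org.springframework.kafka.support.Acknowledgment
// Requires spring.kafka.listener.ack-mode=manual_immediate (or manual) in application.properties
@KafkaListener(topics = {"hello"})
public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
    logger.info("Received message ----------- value = {}", record.value());
    ack.acknowledge(); // explicitly commit the offset for this record
}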
6. Scheduled message-publishing test class
import java.util.Date;
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@EnableScheduling
@Component
public class PublisherController {

    private static final Logger log = LoggerFactory.getLogger(PublisherController.class);

    @Autowired
    private KfkaProducer kfkaProducer;

    @Scheduled(fixedRate = 5000)
    public void pubMsg() {
        Message msg = new Message();
        msg.setId(UUID.randomUUID().toString());
        msg.setMsg("Sending this message to you, hello!");
        msg.setSendTime(new Date());
        kfkaProducer.send("hello", msg);
        log.info("Publisher sent message to topic...");
    }
}
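The article does not show the application entry point; a standard Spring Boot main class is enough to start the producer, listener, and scheduled publisher. A minimal sketch (the class name here is an assumption):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class KafkaDemoApplication { // class name is an assumption, not from the original article
    public static void main(String[] args) {
        SpringApplication.run(KafkaDemoApplication.class, args);
    }
}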
7. Test results
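With the application running against the broker at 192.168.102.88:9092, the scheduled task publishes a Message to the hello topic every five seconds, and the listener should log each received record shortly afterwards; the exact log output depends on your logging configuration.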
That concludes this article. I hope it is helpful for your learning, and please continue to support 好吧啦網.