Compare commits


4 Commits

Author SHA1 Message Date
c922023740 fix(db): fix table.column identifier handling so aggregate functions work correctly
- ProcessColumn no longer adds backquotes to the table.column format; it only adds the table prefix
- ProcessColumnNoPrefix leaves the table.column format unquoted, preserving the original form
- ProcessConditionString no longer backquotes table.column inside condition strings
- Fixes aggregate functions such as Sum, Avg, Max, and Min when used with the table.column format
- Adds full table.column test cases to verify consistent behavior
- Ensures returned column names match the original names, making results easy to read in aggregate scenarios
2026-02-02 11:02:59 +08:00
9949231d7c feat(app): add a test endpoint and strengthen error handling
- Add a test route to the application for displaying test data
- Import the errors package to support the error-handling mechanism
- Set the error object when building context responses
- Automatically convert and store JSON error messages
- Extend the response structure to carry both result and error fields
2026-01-31 02:05:13 +08:00
3fd0975427 refactor(log): improve call-stack lookup for better accuracy
- Comment out the batch cache operation test call to avoid unnecessary execution
- Improve findCaller to record application.go frames with priority
- Prefer application-layer code when reporting the caller, making log output more accurate
- Strengthen framework-file filtering for a cleaner call stack
2026-01-30 22:17:53 +08:00
3d83c41905 feat(cache): add batch operations to improve performance
- Add SessionsGet, SessionsSet, and SessionsDelete to HoTimeCache for batch get, set, and delete of Session cache entries
- Optimize the cache logic to reduce database writes
- Update the documentation with batch-operation usage and performance comparisons
- Add debug logging to trace batch-operation execution
2026-01-30 17:51:43 +08:00
14 changed files with 2242 additions and 68 deletions


@ -0,0 +1,221 @@
---
name: Cache table mode configuration
overview: Add a mode option to CacheDb supporting "new" (default, new table only) and "compatible" (write new, read old), and update the configuration notes.
todos:
  - id: add-mode-field
    content: Add a Mode field to the CacheDb struct
    status: completed
  - id: modify-init
    content: Modify initDbTable to decide, based on Mode, whether to migrate and drop the legacy table
    status: completed
    dependencies:
      - add-mode-field
  - id: add-legacy-get
    content: Add a getLegacy method that reads legacy-table data (unix-timestamp format)
    status: completed
    dependencies:
      - add-mode-field
  - id: modify-get
    content: Modify the get method to fall back to the legacy table in compatible mode
    status: completed
    dependencies:
      - add-legacy-get
  - id: modify-set
    content: Modify the set method to delete the same key from the legacy table after writing the new table in compatible mode
    status: completed
    dependencies:
      - add-mode-field
  - id: modify-delete
    content: Modify the delete method to delete from both the new and legacy tables in compatible mode
    status: completed
    dependencies:
      - add-mode-field
  - id: update-cache-init
    content: Read the mode option in the Init method of cache.go
    status: completed
    dependencies:
      - add-mode-field
  - id: update-config-note
    content: Document the mode option in ConfigNote in var.go
    status: completed
  - id: add-cache-test
    content: Add cache test routes in example/main.go
    status: completed
    dependencies:
      - modify-get
      - modify-set
      - modify-delete
      - update-cache-init
  - id: todo-1769763169689-k7t9twp5t
    content: |
      Update QUICKSTART.md: the cache configuration section needs mode and history documentation
    status: pending
---
# Cache Table Mode Configuration: Implementation Plan
## Requirements
Implement two cache-table modes in [`cache/cache_db.go`](cache/cache_db.go):
- **new** (default): use only the new `hotime_cache` table and migrate legacy-table data automatically
- **compatible**: write to the new table; on reads, check the new table first and then the legacy table, letting old data expire away naturally
## Implementation Steps
### 1. Modify the CacheDb struct
Add a `Mode` field in [`cache/cache_db.go`](cache/cache_db.go):
```go
type CacheDb struct {
	TimeOut    int64
	DbSet      bool
	SessionSet bool
	HistorySet bool
	Mode       string // "new" (default) or "compatible"
	Db         HoTimeDBInterface
	// ...
}
```
### 2. Modify the initDbTable initialization logic
- **new mode**: create the new table and migrate legacy data, **but do not drop the legacy table** (deletion is left to the user, which is safer)
- **compatible mode**: create the new table; neither migrate nor drop the legacy table
Neither mode drops the legacy table automatically, avoiding the risk of data loss from automatic deletion.
### 3. Modify the get method
- **new mode**: read only from the new table
- **compatible mode**: read from the new table first; if nothing is found, fall back to the legacy table
This requires a new `getLegacy` method that must:
1. Query the legacy table
2. Check whether `endtime` (a unix timestamp) has passed
3. If expired: delete the record and return nil
4. If not expired: return the data
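The heart of that method, the unix-timestamp expiry check plus legacy-value unwrapping, can be sketched as below. This is a minimal illustration, not the framework's actual method: `isLegacyExpired` and `unwrapLegacyValue` are hypothetical helper names, and the `{"data": ...}` envelope follows the legacy value format described in this plan.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// isLegacyExpired reports whether a legacy-table record has expired.
// endtime is a unix timestamp in seconds; 0 means "no expiry recorded".
func isLegacyExpired(endtime, now int64) bool {
	return endtime > 0 && endtime <= now
}

// unwrapLegacyValue parses a legacy value string, unwrapping the old
// {"data": xxx} envelope when present and returning the bare value otherwise.
func unwrapLegacyValue(valueStr string) interface{} {
	var data interface{}
	if err := json.Unmarshal([]byte(valueStr), &data); err != nil {
		return nil
	}
	if m, ok := data.(map[string]interface{}); ok {
		if inner, exists := m["data"]; exists {
			return inner
		}
	}
	return data
}

func main() {
	now := time.Now().Unix()
	fmt.Println(isLegacyExpired(now-10, now)) // an endtime in the past is expired
	fmt.Println(isLegacyExpired(now+10, now)) // a future endtime is still valid
	fmt.Println(unwrapLegacyValue(`{"data":"hello"}`))
	fmt.Println(unwrapLegacyValue(`"plain"`))
}
```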
### 4. Modify the set method (write new, delete old)
- **new mode**: write only to the new table (the legacy table is kept but no longer managed)
- **compatible mode**: write to the new table and delete the same key from the legacy table
This actively accelerates the demise of old data instead of waiting for natural expiry.
### 5. Modify the delete method
- **new mode**: delete only from the new table (the legacy table is kept but no longer managed)
- **compatible mode**: delete the key from both the new and legacy tables
This is necessary: otherwise, after a delete from the new table, the next read would fall back to the legacy table and the key would appear impossible to delete.
A new `deleteLegacy` method handles the legacy-table deletion.
### 6. Update cache initialization
Read the `mode` option in the `Init` method of [`cache/cache.go`](cache/cache.go):
```go
that.dbCache = &CacheDb{
	// ...
	Mode: db.GetString("mode"), // read the mode option
}
```
### 7. Update the configuration notes
Document the `mode` option in `ConfigNote` in [`var.go`](var.go):
```go
"db": Map{
	// ...
	"mode": "default new, optional; new uses only the new table and migrates legacy data automatically; compatible writes to the new table and falls back to the legacy table on reads, letting old data expire naturally",
}
```
### 8. Write tests
Add cache test routes in [`example/main.go`](example/main.go) covering:
**new-mode tests:**
- Basic read/write: set/get/delete work correctly
- Expiry: set a short timeout and verify reads return nil after expiry
- Data migration: legacy-table data is migrated into the new table correctly
- Legacy table retained: the legacy table still exists after migration (not dropped automatically)
**compatible-mode tests:**
- New-table read/write: reads prefer the new table
- Legacy fallback: reads fall back to the legacy table when the new table has no data
- Expiry detection: reading expired legacy data returns nil and deletes the record
- Write new, delete old: writing the new table deletes the same key from the legacy table
- Dual-table delete: a delete removes the record from both tables
- Wildcard delete: `key*`-style deletion works
**Edge cases:**
- Empty values: set/get with nil
- Reading a nonexistent key
- Repeated set on the same key
- Timeout parameter handling (default/custom)
## Architecture
```mermaid
flowchart TD
subgraph Config[Configuration]
ModeNew["mode: new (default)"]
ModeCompat["mode: compatible"]
end
subgraph NewMode[new mode]
N1[Init] --> N2[Create new table]
N2 --> N3{Legacy table exists?}
N3 -->|yes| N4[Migrate data to new table]
N4 --> N5[Keep legacy table for manual deletion]
N3 -->|no| N6[Done]
N5 --> N6
NR[Read] --> NR1[Query new table only]
NW[Write] --> NW1[Write new table only]
ND[Delete] --> ND1[Delete from new table only]
end
subgraph CompatMode[compatible mode]
C1[Init] --> C2[Create new table]
C2 --> C3[Keep legacy table]
CR[Read] --> CR1[Query new table]
CR1 -->|miss| CR2[Query legacy table]
CR2 --> CR3{Expired?}
CR3 -->|yes| CR4[Delete legacy record]
CR4 --> CR5[Return nil]
CR3 -->|no| CR6[Return data]
CW[Write] --> CW1[Write new table]
CW1 --> CW2[Delete same key from legacy table]
CD[Delete] --> CD1[Delete from new table]
CD1 --> CD2[Delete from legacy table]
end
```
## Files to Modify
| File | Changes |
|------|----------|
| [`cache/cache_db.go`](cache/cache_db.go) | Add the Mode field; modify initDbTable; add getLegacy (with expiry check and delete); modify get; modify set (write new, delete old); modify delete |
| [`cache/cache.go`](cache/cache.go) | Read the mode option |
| [`var.go`](var.go) | Document mode in ConfigNote |
| [`example/main.go`](example/main.go) | Add cache test routes |
| [`example/config/config.json`](example/config/config.json) | Optional: add a mode example |

cache/cache.go

@ -1,11 +1,36 @@
package cache
import (
"encoding/json"
"errors"
"os"
"time"
. "code.hoteas.com/golang/hotime/common"
)
const debugLogPath = `d:\work\hotimev1.5\.cursor\debug.log`
// debugLog writes a debug log entry
func debugLog(hypothesisId, location, message string, data map[string]interface{}) {
// #region agent log
logFile, _ := os.OpenFile(debugLogPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if logFile != nil {
logEntry, _ := json.Marshal(map[string]interface{}{
"sessionId": "cache-debug",
"runId": "test-run",
"hypothesisId": hypothesisId,
"location": location,
"message": message,
"data": data,
"timestamp": time.Now().UnixMilli(),
})
logFile.Write(append(logEntry, '\n'))
logFile.Close()
}
// #endregion
}
// HoTimeCache supports memory, db, and redis backends; memory is enabled by default. The default priority is memory > redis > db, and the memory cache shares its settings with the database cache.
// Fetched data is automatically backfilled toward higher-priority caches; expired memory entries refresh from redis, but memory never writes back to redis. For redis clusters, enabling the memory cache is not recommended.
type HoTimeCache struct {
@ -181,6 +206,299 @@ func (that *HoTimeCache) Cache(key string, data ...interface{}) *Obj {
return reData
}
// SessionsGet batch-fetches Session cache entries.
// Returns a Map whose keys are cache keys and values are cached values.
// Priority: memory > redis > db; data found at lower priority is backfilled to higher-priority caches.
func (that *HoTimeCache) SessionsGet(keys []string) Map {
if len(keys) == 0 {
return Map{}
}
// #region agent log
debugLog("D", "cache.go:SessionsGet:start", "SessionsGet开始", map[string]interface{}{
"keys_count": len(keys),
"has_memory": that.memoryCache != nil,
"has_redis": that.redisCache != nil,
"has_db": that.dbCache != nil,
})
// #endregion
result := make(Map, len(keys))
missingKeys := keys
// fetch from memory
if that.memoryCache != nil && that.memoryCache.SessionSet {
memResult := that.memoryCache.CachesGet(keys)
// #region agent log
debugLog("D", "cache.go:SessionsGet:memory", "从Memory获取", map[string]interface{}{
"found_count": len(memResult),
})
// #endregion
for k, v := range memResult {
result[k] = v
}
// compute the keys that were missed
missingKeys = make([]string, 0)
for _, k := range keys {
if _, exists := result[k]; !exists {
missingKeys = append(missingKeys, k)
}
}
}
// fetch the missing keys from redis
if len(missingKeys) > 0 && that.redisCache != nil && that.redisCache.SessionSet {
redisResult := that.redisCache.CachesGet(missingKeys)
// #region agent log
debugLog("D", "cache.go:SessionsGet:redis", "从Redis获取", map[string]interface{}{
"missing_count": len(missingKeys),
"found_count": len(redisResult),
})
// #endregion
// backfill into memory
if that.memoryCache != nil && that.memoryCache.SessionSet && len(redisResult) > 0 {
that.memoryCache.CachesSet(redisResult)
// #region agent log
debugLog("D", "cache.go:SessionsGet:backfill_redis_to_mem", "Redis数据反哺到Memory", map[string]interface{}{
"backfill_count": len(redisResult),
})
// #endregion
}
for k, v := range redisResult {
result[k] = v
}
// update the missing keys
newMissing := make([]string, 0)
for _, k := range missingKeys {
if _, exists := result[k]; !exists {
newMissing = append(newMissing, k)
}
}
missingKeys = newMissing
}
// fetch the missing keys from db
if len(missingKeys) > 0 && that.dbCache != nil && that.dbCache.SessionSet {
dbResult := that.dbCache.CachesGet(missingKeys)
// #region agent log
debugLog("D", "cache.go:SessionsGet:db", "从DB获取", map[string]interface{}{
"missing_count": len(missingKeys),
"found_count": len(dbResult),
})
// #endregion
// backfill into memory and redis
if len(dbResult) > 0 {
if that.memoryCache != nil && that.memoryCache.SessionSet {
that.memoryCache.CachesSet(dbResult)
// #region agent log
debugLog("D", "cache.go:SessionsGet:backfill_db_to_mem", "DB数据反哺到Memory", map[string]interface{}{
"backfill_count": len(dbResult),
})
// #endregion
}
if that.redisCache != nil && that.redisCache.SessionSet {
that.redisCache.CachesSet(dbResult)
// #region agent log
debugLog("D", "cache.go:SessionsGet:backfill_db_to_redis", "DB数据反哺到Redis", map[string]interface{}{
"backfill_count": len(dbResult),
})
// #endregion
}
}
for k, v := range dbResult {
result[k] = v
}
}
// #region agent log
debugLog("D", "cache.go:SessionsGet:end", "SessionsGet完成", map[string]interface{}{
"total_found": len(result),
})
// #endregion
return result
}
// SessionsSet batch-sets Session cache entries.
// data: a Map whose keys are cache keys and values are cached values
func (that *HoTimeCache) SessionsSet(data Map) {
if len(data) == 0 {
return
}
// #region agent log
debugLog("A", "cache.go:SessionsSet:start", "SessionsSet开始", map[string]interface{}{
"data_count": len(data),
"has_memory": that.memoryCache != nil,
"has_redis": that.redisCache != nil,
"has_db": that.dbCache != nil,
})
// #endregion
if that.memoryCache != nil && that.memoryCache.SessionSet {
that.memoryCache.CachesSet(data)
// #region agent log
debugLog("A", "cache.go:SessionsSet:memory", "写入Memory完成", map[string]interface{}{"count": len(data)})
// #endregion
}
if that.redisCache != nil && that.redisCache.SessionSet {
that.redisCache.CachesSet(data)
// #region agent log
debugLog("A", "cache.go:SessionsSet:redis", "写入Redis完成", map[string]interface{}{"count": len(data)})
// #endregion
}
if that.dbCache != nil && that.dbCache.SessionSet {
that.dbCache.CachesSet(data)
// #region agent log
debugLog("A", "cache.go:SessionsSet:db", "写入DB完成", map[string]interface{}{"count": len(data)})
// #endregion
}
// #region agent log
debugLog("A", "cache.go:SessionsSet:end", "SessionsSet完成", nil)
// #endregion
}
// SessionsDelete batch-deletes Session cache entries
func (that *HoTimeCache) SessionsDelete(keys []string) {
if len(keys) == 0 {
return
}
// #region agent log
debugLog("C", "cache.go:SessionsDelete:start", "SessionsDelete开始", map[string]interface{}{
"keys_count": len(keys),
"has_memory": that.memoryCache != nil,
"has_redis": that.redisCache != nil,
"has_db": that.dbCache != nil,
})
// #endregion
if that.memoryCache != nil && that.memoryCache.SessionSet {
that.memoryCache.CachesDelete(keys)
// #region agent log
debugLog("C", "cache.go:SessionsDelete:memory", "从Memory删除完成", map[string]interface{}{"count": len(keys)})
// #endregion
}
if that.redisCache != nil && that.redisCache.SessionSet {
that.redisCache.CachesDelete(keys)
// #region agent log
debugLog("C", "cache.go:SessionsDelete:redis", "从Redis删除完成", map[string]interface{}{"count": len(keys)})
// #endregion
}
if that.dbCache != nil && that.dbCache.SessionSet {
that.dbCache.CachesDelete(keys)
// #region agent log
debugLog("C", "cache.go:SessionsDelete:db", "从DB删除完成", map[string]interface{}{"count": len(keys)})
// #endregion
}
// #region agent log
debugLog("C", "cache.go:SessionsDelete:end", "SessionsDelete完成", nil)
// #endregion
}
// CachesGet batch-fetches plain cache entries.
// Returns a Map whose keys are cache keys and values are cached values.
// Priority: memory > redis > db; data found at lower priority is backfilled to higher-priority caches.
func (that *HoTimeCache) CachesGet(keys []string) Map {
if len(keys) == 0 {
return Map{}
}
result := make(Map, len(keys))
missingKeys := keys
// fetch from memory
if that.memoryCache != nil {
memResult := that.memoryCache.CachesGet(keys)
for k, v := range memResult {
result[k] = v
}
// compute the keys that were missed
missingKeys = make([]string, 0)
for _, k := range keys {
if _, exists := result[k]; !exists {
missingKeys = append(missingKeys, k)
}
}
}
// fetch the missing keys from redis
if len(missingKeys) > 0 && that.redisCache != nil {
redisResult := that.redisCache.CachesGet(missingKeys)
// backfill into memory
if that.memoryCache != nil && len(redisResult) > 0 {
that.memoryCache.CachesSet(redisResult)
}
for k, v := range redisResult {
result[k] = v
}
// update the missing keys
newMissing := make([]string, 0)
for _, k := range missingKeys {
if _, exists := result[k]; !exists {
newMissing = append(newMissing, k)
}
}
missingKeys = newMissing
}
// fetch the missing keys from db
if len(missingKeys) > 0 && that.dbCache != nil {
dbResult := that.dbCache.CachesGet(missingKeys)
// backfill into memory and redis
if len(dbResult) > 0 {
if that.memoryCache != nil {
that.memoryCache.CachesSet(dbResult)
}
if that.redisCache != nil {
that.redisCache.CachesSet(dbResult)
}
}
for k, v := range dbResult {
result[k] = v
}
}
return result
}
// CachesSet batch-sets plain cache entries.
// data: a Map whose keys are cache keys and values are cached values
func (that *HoTimeCache) CachesSet(data Map) {
if len(data) == 0 {
return
}
if that.memoryCache != nil {
that.memoryCache.CachesSet(data)
}
if that.redisCache != nil {
that.redisCache.CachesSet(data)
}
if that.dbCache != nil {
that.dbCache.CachesSet(data)
}
}
// CachesDelete batch-deletes plain cache entries
func (that *HoTimeCache) CachesDelete(keys []string) {
if len(keys) == 0 {
return
}
if that.memoryCache != nil {
that.memoryCache.CachesDelete(keys)
}
if that.redisCache != nil {
that.redisCache.CachesDelete(keys)
}
if that.dbCache != nil {
that.dbCache.CachesDelete(keys)
}
}
func (that *HoTimeCache) Init(config Map, hotimeDb HoTimeDBInterface, err ...*Error) {
//guard against empty config
if config == nil {
@ -263,6 +581,10 @@ func (that *HoTimeCache) Init(config Map, hotimeDb HoTimeDBInterface, err ...*Er
if db.Get("timeout") == nil {
db["timeout"] = 60 * 60 * 24 * 30
}
// mode defaults to "compatible" (compatibility mode, for smooth upgrades of older systems)
if db.Get("mode") == nil {
db["mode"] = CacheModeCompatible
}
that.Config["db"] = db
that.dbCache = &CacheDb{
@ -270,6 +592,7 @@ func (that *HoTimeCache) Init(config Map, hotimeDb HoTimeDBInterface, err ...*Er
DbSet: db.GetBool("db"),
SessionSet: db.GetBool("session"),
HistorySet: db.GetBool("history"),
Mode: db.GetString("mode"),
Db: hotimeDb,
}

cache/cache_db.go

@ -3,17 +3,41 @@ package cache
import (
"database/sql"
"encoding/json"
"os"
"strings"
"time"
. "code.hoteas.com/golang/hotime/common"
)
// debugLogDb writes a debug log entry
func debugLogDb(hypothesisId, location, message string, data map[string]interface{}) {
// #region agent log
logFile, _ := os.OpenFile(`d:\work\hotimev1.5\.cursor\debug.log`, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if logFile != nil {
logEntry, _ := json.Marshal(map[string]interface{}{
"sessionId": "cache-db-debug",
"runId": "test-run",
"hypothesisId": hypothesisId,
"location": location,
"message": message,
"data": data,
"timestamp": time.Now().UnixMilli(),
})
logFile.Write(append(logEntry, '\n'))
logFile.Close()
}
// #endregion
}
// table name constants
const (
CacheTableName = "hotime_cache"
CacheHistoryTableName = "hotime_cache_history"
LegacyCacheTableName = "cached" // legacy cache table name
DefaultCacheTimeout = 24 * 60 * 60 // default expiry: 24 hours
CacheModeNew = "new" // new mode: use only the new table
CacheModeCompatible = "compatible" // compatible mode: write new, read old
)
type HoTimeDBInterface interface {
@ -32,7 +56,8 @@ type CacheDb struct {
TimeOut int64
DbSet bool
SessionSet bool
HistorySet bool // whether history recording is enabled
HistorySet bool // whether history recording is enabled
Mode string // cache mode: "new" (default, new table only) or "compatible" (write new, read old)
Db HoTimeDBInterface
*Error
ContextBase
@ -57,6 +82,24 @@ func (that *CacheDb) getHistoryTableName() string {
return that.Db.GetPrefix() + CacheHistoryTableName
}
// getLegacyTableName returns the prefixed legacy cache table name
func (that *CacheDb) getLegacyTableName() string {
return that.Db.GetPrefix() + LegacyCacheTableName
}
// isCompatibleMode reports whether compatible mode is active
func (that *CacheDb) isCompatibleMode() bool {
return that.Mode == CacheModeCompatible
}
// getEffectiveMode returns the effective mode (defaults to new)
func (that *CacheDb) getEffectiveMode() string {
if that.Mode == CacheModeCompatible {
return CacheModeCompatible
}
return CacheModeNew
}
// initDbTable initializes the database tables
func (that *CacheDb) initDbTable() {
if that.isInit {
@ -66,17 +109,22 @@ func (that *CacheDb) initDbTable() {
dbType := that.Db.GetType()
tableName := that.getTableName()
historyTableName := that.getHistoryTableName()
legacyTableName := that.getLegacyTableName()
// check for and create the main table
if !that.tableExists(tableName) {
that.createMainTable(dbType, tableName)
}
// check for and migrate the old cached table
oldTableName := that.Db.GetPrefix() + "cached"
if that.tableExists(oldTableName) {
that.migrateFromCached(dbType, oldTableName, tableName)
// handle the legacy table according to the mode
// new mode: migrate legacy data into the new table (the legacy table is not dropped; that is left to a human)
// compatible mode: no migration; the legacy table stays in use
if that.getEffectiveMode() == CacheModeNew {
if that.tableExists(legacyTableName) {
that.migrateFromCached(dbType, legacyTableName, tableName)
}
}
// compatible mode does nothing here; the legacy table is kept for reads
// check for and create the history table (when history recording is enabled)
if that.HistorySet && !that.tableExists(historyTableName) {
@ -212,14 +260,15 @@ func (that *CacheDb) createHistoryTable(dbType, tableName string) {
that.Db.Exec(createSQL)
}
// migrateFromCached migrates data from the old cached table
// migrateFromCached migrates data from the old cached table (the old table is not dropped; deletion is manual)
func (that *CacheDb) migrateFromCached(dbType, oldTableName, newTableName string) {
var migrateSQL string
switch dbType {
case "mysql":
// deduplicating migration: take each key's latest record (highest id)
migrateSQL = "INSERT INTO `" + newTableName + "` (`key`, `value`, `end_time`, `state`, `create_time`, `modify_time`) " +
// use INSERT IGNORE to avoid duplicate-key conflicts
migrateSQL = "INSERT IGNORE INTO `" + newTableName + "` (`key`, `value`, `end_time`, `state`, `create_time`, `modify_time`) " +
"SELECT c.`key`, c.`value`, FROM_UNIXTIME(c.`endtime`), 0, " +
"FROM_UNIXTIME(c.`time` / 1000000000), FROM_UNIXTIME(c.`time` / 1000000000) " +
"FROM `" + oldTableName + "` c " +
@ -227,7 +276,7 @@ func (that *CacheDb) migrateFromCached(dbType, oldTableName, newTableName string
"ON c.id = m.max_id"
case "sqlite":
migrateSQL = `INSERT INTO "` + newTableName + `" ("key", "value", "end_time", "state", "create_time", "modify_time") ` +
migrateSQL = `INSERT OR IGNORE INTO "` + newTableName + `" ("key", "value", "end_time", "state", "create_time", "modify_time") ` +
`SELECT c."key", c."value", datetime(c."endtime", 'unixepoch'), 0, ` +
`datetime(c."time" / 1000000000, 'unixepoch'), datetime(c."time" / 1000000000, 'unixepoch') ` +
`FROM "` + oldTableName + `" c ` +
@ -240,22 +289,12 @@ func (that *CacheDb) migrateFromCached(dbType, oldTableName, newTableName string
`to_timestamp(c."time" / 1000000000), to_timestamp(c."time" / 1000000000) ` +
`FROM "` + oldTableName + `" c ` +
`INNER JOIN (SELECT "key", MAX(id) as max_id FROM "` + oldTableName + `" GROUP BY "key") m ` +
`ON c.id = m.max_id`
`ON c.id = m.max_id ` +
`ON CONFLICT ("key") DO NOTHING`
}
// run the migration
_, err := that.Db.Exec(migrateSQL)
if err.GetError() == nil {
// migration succeeded; drop the old table
var dropSQL string
switch dbType {
case "mysql":
dropSQL = "DROP TABLE `" + oldTableName + "`"
case "sqlite", "postgres":
dropSQL = `DROP TABLE "` + oldTableName + `"`
}
that.Db.Exec(dropSQL)
}
// run the migration; do not drop the legacy table (manual deletion after confirmation is safer)
that.Db.Exec(migrateSQL)
}
// writeHistory writes a history record
@ -288,25 +327,31 @@ func (that *CacheDb) writeHistory(key string) {
that.Db.Insert(historyTableName, historyData)
}
// get fetches a cache entry
func (that *CacheDb) get(key string) interface{} {
tableName := that.getTableName()
cached := that.Db.Get(tableName, "*", Map{"key": key})
// getLegacy fetches a cache entry from the legacy table (used in compatible mode)
// legacy schema: key, value, endtime (unix seconds), time (nanosecond timestamp)
func (that *CacheDb) getLegacy(key string) interface{} {
legacyTableName := that.getLegacyTableName()
// check that the legacy table exists
if !that.tableExists(legacyTableName) {
return nil
}
cached := that.Db.Get(legacyTableName, "*", Map{"key": key})
if cached == nil {
return nil
}
// compare as strings to detect expiry (ISO format sorts naturally)
endTime := cached.GetString("end_time")
nowTime := Time2Str(time.Now())
if endTime != "" && endTime <= nowTime {
// lazy deletion: on expiry just return nil without deleting immediately
// rely on the randomized cleanup to batch-delete expired data
// check the expiry time (the legacy table stores a unix timestamp)
endTime := cached.GetInt64("endtime")
nowUnix := time.Now().Unix()
if endTime > 0 && endTime <= nowUnix {
// expired: delete this record
that.deleteLegacy(key)
return nil
}
// parse value directly; the {"data": value} wrapper is no longer needed
// parse value (a legacy value may be {"data": xxx} or a bare value)
valueStr := cached.GetString("value")
if valueStr == "" {
return nil
@ -318,9 +363,69 @@ func (that *CacheDb) get(key string) interface{} {
return nil
}
// handle the legacy {"data": xxx} wrapper format
if dataMap, ok := data.(map[string]interface{}); ok {
if innerData, exists := dataMap["data"]; exists {
return innerData
}
}
return data
}
// deleteLegacy deletes from the legacy table (used in compatible mode)
func (that *CacheDb) deleteLegacy(key string) {
legacyTableName := that.getLegacyTableName()
// check that the legacy table exists
if !that.tableExists(legacyTableName) {
return
}
del := strings.Index(key, "*")
// wildcard delete
if del != -1 {
keyPrefix := Substr(key, 0, del)
that.Db.Delete(legacyTableName, Map{"key[~]": keyPrefix + "%"})
} else {
that.Db.Delete(legacyTableName, Map{"key": key})
}
}
// get fetches a cache entry
func (that *CacheDb) get(key string) interface{} {
tableName := that.getTableName()
cached := that.Db.Get(tableName, "*", Map{"key": key})
if cached != nil {
// compare as strings to detect expiry (ISO format sorts naturally)
endTime := cached.GetString("end_time")
nowTime := Time2Str(time.Now())
if endTime != "" && endTime <= nowTime {
// lazy deletion: on expiry just return nil without deleting immediately
// rely on the randomized cleanup to batch-delete expired data
// fall through to the legacy-table check (in compatible mode)
} else {
// parse value directly; the {"data": value} wrapper is no longer needed
valueStr := cached.GetString("value")
if valueStr != "" {
var data interface{}
err := json.Unmarshal([]byte(valueStr), &data)
if err == nil {
return data
}
}
}
}
// compatible mode: fall back to the legacy table when the new table has no data
if that.isCompatibleMode() {
return that.getLegacy(key)
}
return nil
}
// set stores a cache entry
func (that *CacheDb) set(key string, value interface{}, endTime time.Time) {
// serialize value directly, without wrapping
@ -374,6 +479,11 @@ func (that *CacheDb) set(key string, value interface{}, endTime time.Time) {
// write the history record
that.writeHistory(key)
// compatible mode: after writing the new table, delete the same key from the legacy table (accelerates old-data demise)
if that.isCompatibleMode() {
that.deleteLegacy(key)
}
// randomly run the expired-data purge (about a 5% chance)
if Rand(1000) > 950 {
nowTimeStr := Time2Str(time.Now())
@ -387,11 +497,16 @@ func (that *CacheDb) delete(key string) {
del := strings.Index(key, "*")
// wildcard delete
if del != -1 {
key = Substr(key, 0, del)
that.Db.Delete(tableName, Map{"key[~]": key + "%"})
keyPrefix := Substr(key, 0, del)
that.Db.Delete(tableName, Map{"key[~]": keyPrefix + "%"})
} else {
that.Db.Delete(tableName, Map{"key": key})
}
// compatible mode: also delete the key from the legacy table so the fallback cannot read stale data
if that.isCompatibleMode() {
that.deleteLegacy(key)
}
}
// Cache is the cache operation entry point
@ -424,9 +539,9 @@ func (that *CacheDb) Cache(key string, data ...interface{}) *Obj {
}
} else if len(data) >= 2 {
// use the specified timeout
that.SetError(nil)
tempTimeout := ObjToInt64(data[1], that.Error)
if that.GetError() == nil && tempTimeout > 0 {
var err Error
tempTimeout := ObjToInt64(data[1], &err)
if err.GetError() == nil && tempTimeout > 0 {
timeout = tempTimeout
} else {
timeout = that.TimeOut
@ -440,3 +555,139 @@ func (that *CacheDb) Cache(key string, data ...interface{}) *Obj {
that.set(key, data[0], endTime)
return &Obj{Data: nil}
}
// CachesGet batch-fetches cache entries (optimized with an IN query).
// Returns a Map of cache keys to values (missing or expired keys are omitted).
func (that *CacheDb) CachesGet(keys []string) Map {
that.initDbTable()
result := make(Map, len(keys))
if len(keys) == 0 {
return result
}
// #region agent log
debugLogDb("E", "cache_db.go:CachesGet:start", "CacheDb.CachesGet开始", map[string]interface{}{
"keys_count": len(keys),
"keys": keys,
})
// #endregion
tableName := that.getTableName()
nowTime := Time2Str(time.Now())
// batch-fetch with an IN query
cachedList := that.Db.Select(tableName, "*", Map{
"key": keys,
"end_time[>]": nowTime,
})
// #region agent log
debugLogDb("E", "cache_db.go:CachesGet:afterSelect", "DB Select完成", map[string]interface{}{
"table": tableName,
"now_time": nowTime,
"found_rows": len(cachedList),
})
// #endregion
for _, cached := range cachedList {
valueStr := cached.GetString("value")
if valueStr != "" {
var data interface{}
err := json.Unmarshal([]byte(valueStr), &data)
if err == nil {
result[cached.GetString("key")] = data
} else {
// #region agent log
debugLogDb("E", "cache_db.go:CachesGet:unmarshalError", "JSON解析失败", map[string]interface{}{
"key": cached.GetString("key"),
"error": err.Error(),
})
// #endregion
}
}
}
// compatible mode: fall back to the legacy table for keys the new table missed
if that.isCompatibleMode() {
// #region agent log
debugLogDb("E", "cache_db.go:CachesGet:compatMode", "兼容模式检查老表", nil)
// #endregion
for _, key := range keys {
if _, exists := result[key]; !exists {
legacyData := that.getLegacy(key)
if legacyData != nil {
result[key] = legacyData
}
}
}
}
// #region agent log
debugLogDb("E", "cache_db.go:CachesGet:end", "CacheDb.CachesGet完成", map[string]interface{}{
"result_count": len(result),
})
// #endregion
return result
}
// CachesSet batch-sets cache entries.
// data: a Map of cache keys to values
// timeout: optional expiry in seconds; the default timeout is used when omitted
func (that *CacheDb) CachesSet(data Map, timeout ...int64) {
that.initDbTable()
if len(data) == 0 {
return
}
// #region agent log
debugLogDb("A", "cache_db.go:CachesSet:start", "CacheDb.CachesSet开始", map[string]interface{}{
"data_count": len(data),
})
// #endregion
// compute the expiry time
var tim int64
if len(timeout) > 0 && timeout[0] > 0 {
tim = timeout[0]
} else {
tim = that.TimeOut
if tim == 0 {
tim = DefaultCacheTimeout
}
}
endTime := time.Now().Add(time.Duration(tim) * time.Second)
// set entries one by one (preserves transactional consistency and history records)
for key, value := range data {
that.set(key, value, endTime)
}
// #region agent log
debugLogDb("A", "cache_db.go:CachesSet:end", "CacheDb.CachesSet完成", map[string]interface{}{
"data_count": len(data),
"end_time": Time2Str(endTime),
})
// #endregion
}
// CachesDelete batch-deletes cache entries
func (that *CacheDb) CachesDelete(keys []string) {
that.initDbTable()
if len(keys) == 0 {
return
}
tableName := that.getTableName()
// batch-delete with an IN condition
that.Db.Delete(tableName, Map{"key": keys})
// compatible mode: also delete the keys from the legacy table
if that.isCompatibleMode() {
legacyTableName := that.getLegacyTableName()
if that.tableExists(legacyTableName) {
that.Db.Delete(legacyTableName, Map{"key": keys})
}
}
}

cache/cache_memory.go

@ -113,3 +113,42 @@ func (c *CacheMemory) Cache(key string, data ...interface{}) *Obj {
c.set(key, data[0], expireAt)
return nil
}
// CachesGet batch-fetches cache entries.
// Returns a Map of cache keys to values (missing or expired keys are omitted).
func (c *CacheMemory) CachesGet(keys []string) Map {
result := make(Map, len(keys))
for _, key := range keys {
obj := c.get(key)
if obj != nil && obj.Data != nil {
result[key] = obj.Data
}
}
return result
}
// CachesSet batch-sets cache entries.
// data: a Map of cache keys to values
// timeout: optional expiry in seconds; the default timeout is used when omitted
func (c *CacheMemory) CachesSet(data Map, timeout ...int64) {
now := time.Now().Unix()
expireAt := now + c.TimeOut
if len(timeout) > 0 && timeout[0] > 0 {
if timeout[0] > now {
expireAt = timeout[0]
} else {
expireAt = now + timeout[0]
}
}
for key, value := range data {
c.set(key, value, expireAt)
}
}
// CachesDelete batch-deletes cache entries
func (c *CacheMemory) CachesDelete(keys []string) {
for _, key := range keys {
c.delete(key)
}
}

cache/cache_redis.go

@ -188,3 +188,99 @@ func (that *CacheRedis) Cache(key string, data ...interface{}) *Obj {
return reData
}
// CachesGet batch-fetches cache entries (optimized with the Redis MGET command).
// Returns a Map of cache keys to values (missing keys are omitted).
func (that *CacheRedis) CachesGet(keys []string) Map {
result := make(Map, len(keys))
if len(keys) == 0 {
return result
}
conn := that.getConn()
if conn == nil {
return result
}
defer conn.Close()
// build the MGET arguments
args := make([]interface{}, len(keys))
for i, key := range keys {
args[i] = key
}
values, err := redis.Strings(conn.Do("MGET", args...))
if err != nil {
if !strings.Contains(err.Error(), "nil returned") {
that.Error.SetError(err)
}
return result
}
// map the results back into a Map
for i, value := range values {
if value != "" {
result[keys[i]] = value
}
}
return result
}
// CachesSet batch-sets cache entries (optimized with a Redis pipeline).
// data: a Map of cache keys to values
// timeout: optional expiry in seconds; the default timeout is used when omitted
func (that *CacheRedis) CachesSet(data Map, timeout ...int64) {
if len(data) == 0 {
return
}
conn := that.getConn()
if conn == nil {
return
}
defer conn.Close()
tim := that.TimeOut
if len(timeout) > 0 && timeout[0] > 0 {
if timeout[0] > tim {
tim = timeout[0]
} else {
tim = tim + timeout[0]
}
}
// batch-set using a pipeline
conn.Send("MULTI")
for key, value := range data {
conn.Send("SET", key, ObjToStr(value), "EX", ObjToStr(tim))
}
_, err := conn.Do("EXEC")
if err != nil {
that.Error.SetError(err)
}
}
// CachesDelete batch-deletes cache entries (using a single Redis DEL command)
func (that *CacheRedis) CachesDelete(keys []string) {
if len(keys) == 0 {
return
}
conn := that.getConn()
if conn == nil {
return
}
defer conn.Close()
// build the DEL arguments
args := make([]interface{}, len(keys))
for i, key := range keys {
args[i] = key
}
_, err := conn.Do("DEL", args...)
if err != nil {
that.Error.SetError(err)
}
}

cache/type.go

@ -11,6 +11,10 @@ type CacheIns interface {
GetError() *Error
SetError(err *Error)
Cache(key string, data ...interface{}) *Obj
// batch operations
CachesGet(keys []string) Map // batch get
CachesSet(data Map, timeout ...int64) // batch set
CachesDelete(keys []string) // batch delete
}
// a single cache entry


@ -3,6 +3,7 @@ package hotime
import (
"bytes"
"encoding/json"
"errors"
"io"
"mime/multipart"
"net/http"
@ -59,6 +60,9 @@ func (that *Context) Display(statu int, data interface{}) {
resp["result"] = temp
//compatibility with android and other clients that need the JSON parsed into an object
resp["error"] = temp
that.Error.SetError(errors.New(resp.ToJsonString()))
} else {
resp["result"] = data
}


@ -83,21 +83,22 @@ func (p *IdentifierProcessor) ProcessTableNameNoPrefix(name string) string {
// ProcessColumn handles the table.column format
// input: "name" or "order.name" or "`order`.name" or "`order`.`name`"
// output: "`name`" or "`app_order`.`name`" (MySQL)
// output: "`name`" or "app_order.name"
// note: bare column names are quoted to avoid keyword conflicts; the table.column format is left unquoted
func (p *IdentifierProcessor) ProcessColumn(name string) string {
// check for a dot
if !strings.Contains(name, ".") {
// a bare column name: just add quotes
// a bare column name: quote it (avoids keyword conflicts)
return p.dialect.QuoteIdentifier(p.stripQuotes(name))
}
// handle the table.column format
// handle the table.column format: no quotes, just add the prefix
parts := p.splitTableColumn(name)
if len(parts) == 2 {
tableName := p.stripQuotes(parts[0])
columnName := p.stripQuotes(parts[1])
// add the prefix to the table name
return p.dialect.QuoteIdentifier(p.prefix+tableName) + "." + p.dialect.QuoteIdentifier(columnName)
// the table.column format is left unquoted
return p.prefix + tableName + "." + columnName
}
// cannot parse; return as-is with quotes converted
@ -107,14 +108,16 @@ func (p *IdentifierProcessor) ProcessColumn(name string) string {
// ProcessColumnNoPrefix handles the table.column format (without adding a prefix)
func (p *IdentifierProcessor) ProcessColumnNoPrefix(name string) string {
if !strings.Contains(name, ".") {
// a bare column name: quote it (avoids keyword conflicts)
return p.dialect.QuoteIdentifier(p.stripQuotes(name))
}
// the table.column format is left unquoted
parts := p.splitTableColumn(name)
if len(parts) == 2 {
tableName := p.stripQuotes(parts[0])
columnName := p.stripQuotes(parts[1])
return p.dialect.QuoteIdentifier(tableName) + "." + p.dialect.QuoteIdentifier(columnName)
return tableName + "." + columnName
}
return p.convertQuotes(name)
@ -122,7 +125,9 @@ func (p *IdentifierProcessor) ProcessColumnNoPrefix(name string) string {
// ProcessConditionString intelligently parses condition strings (such as ON clauses)
// input: "user.id = order.user_id AND order.status = 1"
// output: "`app_user`.`id` = `app_order`.`user_id` AND `app_order`.`status` = 1" (MySQL)
// output: "app_user.id = app_order.user_id AND app_order.status = 1"
// note: table.column is left unquoted because MySQL, SQLite, and PostgreSQL all parse it correctly
// this keeps returned column names identical to the originals, which helps when reading aggregate results
func (p *IdentifierProcessor) ProcessConditionString(condition string) string {
if condition == "" {
return condition
@ -131,20 +136,21 @@ func (p *IdentifierProcessor) ProcessConditionString(condition string) string {
result := condition
// first handle the fully quoted forms: `table`.`column` or "table"."column"
// handle these first because their shape is the most explicit
// strip the quotes and add only the prefix
fullyQuotedPattern := regexp.MustCompile("[`\"]([a-zA-Z_][a-zA-Z0-9_]*)[`\"]\\.[`\"]([a-zA-Z_][a-zA-Z0-9_]*)[`\"]")
result = fullyQuotedPattern.ReplaceAllStringFunc(result, func(match string) string {
parts := fullyQuotedPattern.FindStringSubmatch(match)
if len(parts) == 3 {
tableName := parts[1]
colName := parts[2]
return p.dialect.QuoteIdentifier(p.prefix+tableName) + "." + p.dialect.QuoteIdentifier(colName)
// the table.column format is left unquoted; only the prefix is added
return p.prefix + tableName + "." + colName
}
return match
})
// then handle the partially quoted forms: `table`.column or "table".column
// note: avoid matching content that was already processed (already wrapped in double quotes)
// strip the quotes and add only the prefix
quotedTablePattern := regexp.MustCompile("[`\"]([a-zA-Z_][a-zA-Z0-9_]*)[`\"]\\.([a-zA-Z_][a-zA-Z0-9_]*)(?:[^`\"]|$)")
result = quotedTablePattern.ReplaceAllStringFunc(result, func(match string) string {
parts := quotedTablePattern.FindStringSubmatch(match)
@ -159,7 +165,8 @@ func (p *IdentifierProcessor) ProcessConditionString(condition string) string {
suffix = string(lastChar)
}
}
// The table.column form is left unquoted; only the prefix is added
return p.prefix + tableName + "." + colName + suffix
}
return match
})
@ -174,7 +181,8 @@ func (p *IdentifierProcessor) ProcessConditionString(condition string) string {
tableName := parts[2]
colName := parts[3]
suffix := parts[4] // the trailing boundary character
// The table.column form is left unquoted; only the prefix is added
return prefix + p.prefix + tableName + "." + colName + suffix
}
return match
})
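The prefix-only rewrite in the hunks above can be sketched as a standalone snippet. The `app_` prefix and the single simplified pattern are illustrative only, not the real dialect logic:

```go
package main

import (
	"fmt"
	"regexp"
)

// prefixCondition mimics the fixed behavior: a quoted `table`.`column`
// pair loses its quotes and gains only the table prefix.
func prefixCondition(condition, prefix string) string {
	fullyQuoted := regexp.MustCompile("[`\"]([a-zA-Z_][a-zA-Z0-9_]*)[`\"]\\.[`\"]([a-zA-Z_][a-zA-Z0-9_]*)[`\"]")
	return fullyQuoted.ReplaceAllString(condition, prefix+"${1}.${2}")
}

func main() {
	fmt.Println(prefixCondition("`user`.`id` = `order`.`user_id`", "app_"))
	// app_user.id = app_order.user_id
}
```

Because the rewritten identifiers carry no backticks, the column names in the result set match what the caller passed in, which is what makes the aggregate-function fix work.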

View File

@ -135,7 +135,9 @@ func main() {
"db": {
"db": true,
"session": true,
"timeout": 2592000
"timeout": 2592000,
"history": false,
"mode": "compatible"
}
}
}
@ -143,6 +145,27 @@ func main() {
Cache priority: **Memory > Redis > DB**, with automatic read-through and backfill
#### DB cache configuration
| Option | Default | Description |
|--------|---------|-------------|
| `db` | false | Whether to cache database queries |
| `session` | true | Whether to cache sessions |
| `timeout` | 2592000 | Expiration time (seconds) |
| `history` | false | Whether to record cache history; when enabled, every cache insert/update is written to the history table |
| `mode` | compatible | Cache table mode, see below |
**Cache table mode (`mode`)**
| Mode | Description |
|------|-------------|
| `compatible` | **Default.** Compatibility mode: writes go to the new table, and reads fall back to the legacy table when the new table has no data; a write deletes the same key from the legacy table; a delete removes the key from both tables. Suited to a smooth upgrade from older versions, letting legacy data expire naturally |
| `new` | Uses only the new table `hotime_cache`; legacy `cached` data is migrated automatically at startup. The legacy table is no longer read or written and can be dropped manually |
> **Upgrade advice**: when upgrading from an older version, run in `compatible` mode for a while (letting legacy data expire naturally), and switch to `new` mode once everything checks out
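Compatible mode's read path boils down to a two-step lookup. A minimal sketch with the tables stubbed as maps (the real `CacheDb` queries `hotime_cache` and `cached` via SQL; the helper names here are illustrative):

```go
package main

import "fmt"

// Stub tables: the new cache table and the legacy one.
var newTable = map[string]string{"fresh": "new value"}
var legacyTable = map[string]string{"old_session": "legacy value"}

// getCompat reads the new table first and falls back to the
// legacy table on a miss, mirroring compatible-mode reads.
func getCompat(key string) (string, bool) {
	if v, ok := newTable[key]; ok {
		return v, true // new-table hit wins
	}
	v, ok := legacyTable[key] // fallback read of the legacy `cached` table
	return v, ok
}

func main() {
	v1, _ := getCompat("fresh")
	v2, _ := getCompat("old_session")
	fmt.Println(v1, v2) // new value legacy value
}
```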
### 错误码配置
```json
@ -342,6 +365,44 @@ data := that.Cache("key") // 获取
that.Cache("key", nil) // 删除
```
### Batch operations (performance optimization)
When several session fields need to be touched at once, batch operations give a noticeable speedup:
```go
// SessionsSet - batch set: N fields trigger only 1 database write
that.SessionsSet(Map{
"user_id": userId,
"username": "张三",
"login_time": time.Now().Unix(),
"role": "admin",
})
// SessionsGet - batch get: one call fetches multiple fields
result := that.SessionsGet("user_id", "username", "role")
// result = Map{"user_id": 123, "username": "张三", "role": "admin"}
userId := ObjToInt64(result["user_id"], nil)
username := ObjToStr(result["username"])
// SessionsDelete - batch delete: N fields trigger only 1 database write
that.SessionsDelete("token", "temp_code", "verify_expire")
```
**Performance comparison**
| Approach | Setting 10 fields | Database writes |
|----------|-------------------|-----------------|
| Calling `Session()` one by one | 10 calls | **10** |
| Using `SessionsSet()` | 1 call | **1** |
| Approach | Getting 10 fields | Cache lookups |
|----------|-------------------|---------------|
| Calling `Session()` one by one | 10 calls | **10** |
| Using `SessionsGet()` | 1 call | **1** |
> 💡 **Best practice**: prefer batch operations when touching 3 or more fields at once
The three cache levels work automatically: **Memory → Redis → Database**
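The three-level read-through with backfill can be sketched as follows, with the levels stubbed as maps (the real levels are the in-process cache, Redis, and the cache table):

```go
package main

import "fmt"

// Stub levels, fastest to slowest.
var memory = map[string]string{}
var redisL = map[string]string{}
var db = map[string]string{"k": "v"}

// get falls through Memory → Redis → DB and writes a hit
// back into every faster level so later reads are cheap.
func get(key string) (string, bool) {
	if v, ok := memory[key]; ok {
		return v, true
	}
	if v, ok := redisL[key]; ok {
		memory[key] = v // backfill the faster level
		return v, true
	}
	if v, ok := db[key]; ok {
		redisL[key] = v // backfill both faster levels
		memory[key] = v
		return v, true
	}
	return "", false
}

func main() {
	v, _ := get("k")
	fmt.Println(v, memory["k"]) // v v
}
```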
## Database operations (overview)
@ -439,6 +500,8 @@ that.Log = Map{
package main
import (
"time"
. "code.hoteas.com/golang/hotime"
. "code.hoteas.com/golang/hotime/common"
)
@ -481,14 +544,25 @@ func main() {
return
}
that.Session("user_id", user.GetInt64("id"))
// 使用批量设置,一次写入多个字段
that.SessionsSet(Map{
"user_id": user.GetInt64("id"),
"username": user.GetString("name"),
"login_time": time.Now().Unix(),
})
that.Display(0, Map{"user": user})
},
"info": func(that *Context) {
// Batch-get several fields in a single read
sess := that.SessionsGet("user_id", "username", "login_time")
userId := ObjToInt64(sess["user_id"], nil)
user := that.Db.Get("user", "*", Map{"id": userId})
that.Display(0, Map{"user": user})
that.Display(0, Map{
"user": user,
"login_time": sess["login_time"],
})
},
"list": func(that *Context) {
@ -513,7 +587,8 @@ func main() {
},
"logout": func(that *Context) {
that.Session("user_id", nil)
// 使用批量删除,一次清除多个字段
that.SessionsDelete("user_id", "username", "login_time")
that.Display(0, "退出成功")
},
},

View File

@ -0,0 +1,487 @@
package main
import (
"encoding/json"
"fmt"
"os"
"time"
"code.hoteas.com/golang/hotime"
"code.hoteas.com/golang/hotime/cache"
. "code.hoteas.com/golang/hotime/common"
)
const debugLogPath = `d:\work\hotimev1.5\.cursor\debug.log`
// debugLog writes a debug log entry
func debugLog(hypothesisId, location, message string, data map[string]interface{}) {
logFile, _ := os.OpenFile(debugLogPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if logFile != nil {
logEntry, _ := json.Marshal(map[string]interface{}{
"sessionId": "batch-cache-test",
"runId": "test-run",
"hypothesisId": hypothesisId,
"location": location,
"message": message,
"data": data,
"timestamp": time.Now().UnixMilli(),
})
logFile.Write(append(logEntry, '\n'))
logFile.Close()
}
}
// TestBatchCacheOperations exercises all batch cache operations
func TestBatchCacheOperations(app *hotime.Application) {
fmt.Println("\n========== 批量缓存操作测试开始 ==========")
// Test 1: CacheMemory batch operations
fmt.Println("\n--- 测试1: CacheMemory 批量操作 ---")
testCacheMemoryBatch(app)
// Test 2: CacheDb batch operations
fmt.Println("\n--- 测试2: CacheDb 批量操作 ---")
testCacheDbBatch(app)
// Test 3: HoTimeCache three-level cache batch operations
fmt.Println("\n--- 测试3: HoTimeCache 三级缓存批量操作 ---")
testHoTimeCacheBatch(app)
// Test 4: SessionIns batch operations
fmt.Println("\n--- 测试4: SessionIns 批量操作 ---")
testSessionInsBatch(app)
// Test 5: cache backfill mechanism
fmt.Println("\n--- 测试5: 缓存反哺机制测试 ---")
testCacheBackfill(app)
// Test 6: batch operation efficiency (single-write verification)
fmt.Println("\n--- 测试6: 批量操作效率测试 ---")
testBatchEfficiency(app)
fmt.Println("\n========== 批量缓存操作测试完成 ==========")
}
// testCacheMemoryBatch tests in-memory cache batch operations
func testCacheMemoryBatch(app *hotime.Application) {
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheMemoryBatch:start", "开始测试CacheMemory批量操作", nil)
// #endregion
memCache := &cache.CacheMemory{TimeOut: 3600, DbSet: true, SessionSet: true}
memCache.SetError(&Error{})
// Test CachesSet
testData := Map{
"mem_key1": "value1",
"mem_key2": "value2",
"mem_key3": "value3",
}
memCache.CachesSet(testData)
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheMemoryBatch:afterSet", "CacheMemory.CachesSet完成", map[string]interface{}{"count": len(testData)})
// #endregion
// Test CachesGet
keys := []string{"mem_key1", "mem_key2", "mem_key3", "mem_key_not_exist"}
result := memCache.CachesGet(keys)
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheMemoryBatch:afterGet", "CacheMemory.CachesGet完成", map[string]interface{}{
"requested_keys": keys,
"result_count": len(result),
"result_keys": getMapKeys(result),
})
// #endregion
if len(result) != 3 {
fmt.Printf(" [FAIL] CacheMemory.CachesGet: 期望3个结果实际%d个\n", len(result))
} else {
fmt.Println(" [PASS] CacheMemory.CachesGet: 批量获取正确")
}
// Test CachesDelete
memCache.CachesDelete([]string{"mem_key1", "mem_key2"})
result2 := memCache.CachesGet(keys)
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheMemoryBatch:afterDelete", "CacheMemory.CachesDelete完成", map[string]interface{}{
"deleted_keys": []string{"mem_key1", "mem_key2"},
"remaining_count": len(result2),
"remaining_keys": getMapKeys(result2),
})
// #endregion
if len(result2) != 1 || result2["mem_key3"] == nil {
fmt.Printf(" [FAIL] CacheMemory.CachesDelete: 删除后期望1个结果实际%d个\n", len(result2))
} else {
fmt.Println(" [PASS] CacheMemory.CachesDelete: 批量删除正确")
}
}
// testCacheDbBatch tests database cache batch operations
func testCacheDbBatch(app *hotime.Application) {
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheDbBatch:start", "开始测试CacheDb批量操作", nil)
// #endregion
// Use the application's database connection
dbCache := &cache.CacheDb{
TimeOut: 3600,
DbSet: true,
SessionSet: true,
Mode: cache.CacheModeNew,
Db: &app.Db,
}
dbCache.SetError(&Error{})
// Clean up test data
dbCache.CachesDelete([]string{"db_batch_key1", "db_batch_key2", "db_batch_key3"})
// Test CachesSet
testData := Map{
"db_batch_key1": "db_value1",
"db_batch_key2": "db_value2",
"db_batch_key3": "db_value3",
}
dbCache.CachesSet(testData)
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheDbBatch:afterSet", "CacheDb.CachesSet完成", map[string]interface{}{"count": len(testData)})
// #endregion
// Test CachesGet
keys := []string{"db_batch_key1", "db_batch_key2", "db_batch_key3", "db_not_exist"}
result := dbCache.CachesGet(keys)
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheDbBatch:afterGet", "CacheDb.CachesGet完成", map[string]interface{}{
"requested_keys": keys,
"result_count": len(result),
"result_keys": getMapKeys(result),
"result_values": result,
})
// #endregion
if len(result) != 3 {
fmt.Printf(" [FAIL] CacheDb.CachesGet: 期望3个结果实际%d个\n", len(result))
} else {
fmt.Println(" [PASS] CacheDb.CachesGet: 批量获取正确")
}
// Verify value correctness
if result["db_batch_key1"] != "db_value1" {
fmt.Printf(" [FAIL] CacheDb.CachesGet: db_batch_key1 值不正确,期望 db_value1实际 %v\n", result["db_batch_key1"])
} else {
fmt.Println(" [PASS] CacheDb.CachesGet: 值内容正确")
}
// Test CachesDelete
dbCache.CachesDelete([]string{"db_batch_key1", "db_batch_key2"})
result2 := dbCache.CachesGet(keys)
// #region agent log
debugLog("E", "batch_cache_test.go:testCacheDbBatch:afterDelete", "CacheDb.CachesDelete完成", map[string]interface{}{
"deleted_keys": []string{"db_batch_key1", "db_batch_key2"},
"remaining_count": len(result2),
"remaining_keys": getMapKeys(result2),
})
// #endregion
if len(result2) != 1 {
fmt.Printf(" [FAIL] CacheDb.CachesDelete: 删除后期望1个结果实际%d个\n", len(result2))
} else {
fmt.Println(" [PASS] CacheDb.CachesDelete: 批量删除正确")
}
// Clean up
dbCache.CachesDelete([]string{"db_batch_key3"})
}
// testHoTimeCacheBatch tests HoTimeCache three-level batch operations
func testHoTimeCacheBatch(app *hotime.Application) {
// #region agent log
debugLog("D", "batch_cache_test.go:testHoTimeCacheBatch:start", "开始测试HoTimeCache三级缓存批量操作", nil)
// #endregion
htCache := app.HoTimeCache
// Clean up test data
htCache.SessionsDelete([]string{
hotime.HEAD_SESSION_ADD + "ht_batch_key1",
hotime.HEAD_SESSION_ADD + "ht_batch_key2",
hotime.HEAD_SESSION_ADD + "ht_batch_key3",
})
// Test SessionsSet
testData := Map{
hotime.HEAD_SESSION_ADD + "ht_batch_key1": Map{"user": "test1", "role": "admin"},
hotime.HEAD_SESSION_ADD + "ht_batch_key2": Map{"user": "test2", "role": "user"},
hotime.HEAD_SESSION_ADD + "ht_batch_key3": Map{"user": "test3", "role": "guest"},
}
htCache.SessionsSet(testData)
// #region agent log
debugLog("D", "batch_cache_test.go:testHoTimeCacheBatch:afterSet", "HoTimeCache.SessionsSet完成", map[string]interface{}{"count": len(testData)})
// #endregion
// Test SessionsGet
keys := []string{
hotime.HEAD_SESSION_ADD + "ht_batch_key1",
hotime.HEAD_SESSION_ADD + "ht_batch_key2",
hotime.HEAD_SESSION_ADD + "ht_batch_key3",
hotime.HEAD_SESSION_ADD + "ht_not_exist",
}
result := htCache.SessionsGet(keys)
// #region agent log
debugLog("D", "batch_cache_test.go:testHoTimeCacheBatch:afterGet", "HoTimeCache.SessionsGet完成", map[string]interface{}{
"requested_keys": len(keys),
"result_count": len(result),
"result_keys": getMapKeys(result),
})
// #endregion
if len(result) != 3 {
fmt.Printf(" [FAIL] HoTimeCache.SessionsGet: 期望3个结果实际%d个\n", len(result))
} else {
fmt.Println(" [PASS] HoTimeCache.SessionsGet: 批量获取正确")
}
// Test SessionsDelete
htCache.SessionsDelete([]string{
hotime.HEAD_SESSION_ADD + "ht_batch_key1",
hotime.HEAD_SESSION_ADD + "ht_batch_key2",
})
result2 := htCache.SessionsGet(keys)
// #region agent log
debugLog("D", "batch_cache_test.go:testHoTimeCacheBatch:afterDelete", "HoTimeCache.SessionsDelete完成", map[string]interface{}{
"remaining_count": len(result2),
})
// #endregion
if len(result2) != 1 {
fmt.Printf(" [FAIL] HoTimeCache.SessionsDelete: 删除后期望1个结果实际%d个\n", len(result2))
} else {
fmt.Println(" [PASS] HoTimeCache.SessionsDelete: 批量删除正确")
}
// Clean up
htCache.SessionsDelete([]string{hotime.HEAD_SESSION_ADD + "ht_batch_key3"})
}
// testSessionInsBatch tests SessionIns batch operations
func testSessionInsBatch(app *hotime.Application) {
// #region agent log
debugLog("B", "batch_cache_test.go:testSessionInsBatch:start", "开始测试SessionIns批量操作", nil)
// #endregion
// Create a mock SessionIns
session := &hotime.SessionIns{
SessionId: "test_batch_session_" + ObjToStr(time.Now().UnixNano()),
}
session.Init(app.HoTimeCache)
// Test SessionsSet
testData := Map{
"field1": "value1",
"field2": 123,
"field3": Map{"nested": "data"},
}
session.SessionsSet(testData)
// #region agent log
debugLog("B", "batch_cache_test.go:testSessionInsBatch:afterSet", "SessionIns.SessionsSet完成", map[string]interface{}{
"session_id": session.SessionId,
"count": len(testData),
})
// #endregion
// Test SessionsGet
result := session.SessionsGet("field1", "field2", "field3", "not_exist")
// #region agent log
debugLog("B", "batch_cache_test.go:testSessionInsBatch:afterGet", "SessionIns.SessionsGet完成", map[string]interface{}{
"result_count": len(result),
"result_keys": getMapKeys(result),
"result": result,
})
// #endregion
if len(result) != 3 {
fmt.Printf(" [FAIL] SessionIns.SessionsGet: 期望3个结果实际%d个\n", len(result))
} else {
fmt.Println(" [PASS] SessionIns.SessionsGet: 批量获取正确")
}
// Verify value types
if result["field1"] != "value1" {
fmt.Printf(" [FAIL] SessionIns.SessionsGet: field1 值不正确\n")
} else {
fmt.Println(" [PASS] SessionIns.SessionsGet: 字符串值正确")
}
var convErr Error
if ObjToInt(result["field2"], &convErr) != 123 {
fmt.Printf(" [FAIL] SessionIns.SessionsGet: field2 值不正确\n")
} else {
fmt.Println(" [PASS] SessionIns.SessionsGet: 数值类型正确")
}
// Test SessionsDelete
session.SessionsDelete("field1", "field2")
result2 := session.SessionsGet("field1", "field2", "field3")
// #region agent log
debugLog("B", "batch_cache_test.go:testSessionInsBatch:afterDelete", "SessionIns.SessionsDelete完成", map[string]interface{}{
"remaining_count": len(result2),
"remaining_keys": getMapKeys(result2),
})
// #endregion
if len(result2) != 1 {
fmt.Printf(" [FAIL] SessionIns.SessionsDelete: 删除后期望1个结果实际%d个\n", len(result2))
} else {
fmt.Println(" [PASS] SessionIns.SessionsDelete: 批量删除正确")
}
}
// testCacheBackfill tests the cache backfill mechanism
func testCacheBackfill(app *hotime.Application) {
// #region agent log
debugLog("D", "batch_cache_test.go:testCacheBackfill:start", "开始测试缓存反哺机制", nil)
// #endregion
htCache := app.HoTimeCache
// Write directly to the DB cache (bypassing memory) to simulate data existing only in the DB
dbCache := &cache.CacheDb{
TimeOut: 3600,
DbSet: true,
SessionSet: true,
Mode: cache.CacheModeNew,
Db: &app.Db,
}
dbCache.SetError(&Error{})
testKey := "backfill_test_key_" + ObjToStr(time.Now().UnixNano())
testValue := Map{"backfill": "test_data"}
// Write directly to the DB
dbCache.Cache(testKey, testValue)
// #region agent log
debugLog("D", "batch_cache_test.go:testCacheBackfill:dbWritten", "数据直接写入DB", map[string]interface{}{
"key": testKey,
"value": testValue,
})
// #endregion
// Batch-get via HoTimeCache, which should backfill the value into memory
keys := []string{testKey}
result := htCache.CachesGet(keys)
// #region agent log
debugLog("D", "batch_cache_test.go:testCacheBackfill:afterGet", "HoTimeCache.CachesGet完成", map[string]interface{}{
"result_count": len(result),
"has_key": result[testKey] != nil,
})
// #endregion
if len(result) != 1 || result[testKey] == nil {
fmt.Println(" [FAIL] 缓存反哺: 从 DB 读取失败")
} else {
fmt.Println(" [PASS] 缓存反哺: 从 DB 读取成功")
}
// Clean up
htCache.CachesDelete(keys)
}
// testBatchEfficiency measures batch operation efficiency
func testBatchEfficiency(app *hotime.Application) {
// #region agent log
debugLog("A", "batch_cache_test.go:testBatchEfficiency:start", "开始测试批量操作效率", nil)
// #endregion
session := &hotime.SessionIns{
SessionId: "efficiency_test_" + ObjToStr(time.Now().UnixNano()),
}
session.Init(app.HoTimeCache)
// Record the batch-set start time
startTime := time.Now()
// Set 10 fields
testData := Map{}
for i := 0; i < 10; i++ {
testData[fmt.Sprintf("eff_field_%d", i)] = fmt.Sprintf("value_%d", i)
}
session.SessionsSet(testData)
batchDuration := time.Since(startTime)
// #region agent log
debugLog("A", "batch_cache_test.go:testBatchEfficiency:batchSet", "批量设置完成", map[string]interface{}{
"count": len(testData),
"duration_ms": batchDuration.Milliseconds(),
})
// #endregion
// Compare with setting fields one by one
session2 := &hotime.SessionIns{
SessionId: "efficiency_test_single_" + ObjToStr(time.Now().UnixNano()),
}
session2.Init(app.HoTimeCache)
startTime2 := time.Now()
for i := 0; i < 10; i++ {
session2.Session(fmt.Sprintf("single_field_%d", i), fmt.Sprintf("value_%d", i))
}
singleDuration := time.Since(startTime2)
// #region agent log
debugLog("A", "batch_cache_test.go:testBatchEfficiency:singleSet", "单个设置完成", map[string]interface{}{
"count": 10,
"duration_ms": singleDuration.Milliseconds(),
})
// #endregion
fmt.Printf(" 批量设置10个字段耗时: %v\n", batchDuration)
fmt.Printf(" 单个设置10个字段耗时: %v\n", singleDuration)
if batchDuration < singleDuration {
fmt.Println(" [PASS] 批量操作效率: 批量操作更快")
} else {
fmt.Println(" [WARN] 批量操作效率: 批量操作未体现优势(可能数据量太小)")
}
// Batch-get test
startTime3 := time.Now()
keys := make([]string, 10)
for i := 0; i < 10; i++ {
keys[i] = fmt.Sprintf("eff_field_%d", i)
}
session.SessionsGet(keys...)
batchGetDuration := time.Since(startTime3)
// #region agent log
debugLog("A", "batch_cache_test.go:testBatchEfficiency:batchGet", "批量获取完成", map[string]interface{}{
"count": 10,
"duration_ms": batchGetDuration.Milliseconds(),
})
// #endregion
fmt.Printf(" 批量获取10个字段耗时: %v\n", batchGetDuration)
}
// getMapKeys returns all keys of a Map
func getMapKeys(m Map) []string {
keys := make([]string, 0, len(m))
for k := range m {
keys = append(keys, k)
}
return keys
}

View File

@ -1,10 +1,12 @@
package main
import (
. "code.hoteas.com/golang/hotime"
"encoding/json"
"fmt"
"time"
. "code.hoteas.com/golang/hotime"
. "code.hoteas.com/golang/hotime/common"
. "code.hoteas.com/golang/hotime/db"
)
@ -18,6 +20,9 @@ func main() {
appIns.Run(Router{
"app": {
"test": {
"test": func(that *Context) {
that.Display(2, "dsadasd")
},
// Test entry point - run all tests
"all": func(that *Context) {
results := Map{}
@ -90,6 +95,16 @@ func main() {
"upsert": func(that *Context) { that.Display(0, testUpsert(that)) },
"transaction": func(that *Context) { that.Display(0, testTransaction(that)) },
"rawsql": func(that *Context) { that.Display(0, testRawSQL(that)) },
// ==================== Cache tests ====================
// Run all cache tests
"cache": func(that *Context) { that.Display(0, testCacheAll(that)) },
"cache-compat": func(that *Context) { that.Display(0, testCacheCompatible(that)) },
// Batch cache operation tests
"cache-batch": func(that *Context) {
//TestBatchCacheOperations(that.Application)
that.Display(0, Map{"message": "批量缓存测试完成,请查看控制台输出和日志文件"})
},
},
},
})
@ -549,32 +564,36 @@ func testAggregate(that *Context) Map {
test2["count"] = count2
tests = append(tests, test2)
// 5.3 Sum - single column name
test3 := Map{"name": "Sum 求和 (单字段名)"}
sum3 := that.Db.Sum("article", "click_num", Map{"state": 0})
test3["result"] = sum3 >= 0
test3["sum"] = sum3
test3["lastQuery"] = that.Db.LastQuery
tests = append(tests, test3)
// 5.4 Avg - single column name
test4 := Map{"name": "Avg 平均值 (单字段名)"}
avg4 := that.Db.Avg("article", "click_num", Map{"state": 0})
test4["result"] = avg4 >= 0
test4["avg"] = avg4
test4["lastQuery"] = that.Db.LastQuery
tests = append(tests, test4)
// 5.5 Max - single column name
test5 := Map{"name": "Max 最大值 (单字段名)"}
max5 := that.Db.Max("article", "click_num", Map{"state": 0})
test5["result"] = max5 >= 0
test5["max"] = max5
test5["lastQuery"] = that.Db.LastQuery
tests = append(tests, test5)
// 5.6 Min - single column name
test6 := Map{"name": "Min 最小值 (单字段名)"}
min6 := that.Db.Min("article", "sort", Map{"state": 0})
test6["result"] = true // sort 可能为 0
test6["min"] = min6
test6["lastQuery"] = that.Db.LastQuery
tests = append(tests, test6)
// 5.7 GROUP BY grouped statistics
@ -591,6 +610,117 @@ func testAggregate(that *Context) Map {
test7["stats"] = stats7
tests = append(tests, test7)
// ==================== New: table.column format tests ====================
// 5.8 Sum - table.column format (fix verification)
test8 := Map{"name": "Sum 求和 (table.column 格式)"}
sum8 := that.Db.Sum("article", "article.click_num", Map{"state": 0})
test8["result"] = sum8 >= 0
test8["sum"] = sum8
test8["expected"] = "与单字段名 Sum 结果相同"
test8["match_single_field"] = sum8 == sum3 // 应该与 test3 结果相同
test8["lastQuery"] = that.Db.LastQuery
tests = append(tests, test8)
// 5.9 Avg - table.column format
test9 := Map{"name": "Avg 平均值 (table.column 格式)"}
avg9 := that.Db.Avg("article", "article.click_num", Map{"state": 0})
test9["result"] = avg9 >= 0
test9["avg"] = avg9
test9["match_single_field"] = avg9 == avg4
test9["lastQuery"] = that.Db.LastQuery
tests = append(tests, test9)
// 5.10 Max - table.column format
test10 := Map{"name": "Max 最大值 (table.column 格式)"}
max10 := that.Db.Max("article", "article.click_num", Map{"state": 0})
test10["result"] = max10 >= 0
test10["max"] = max10
test10["match_single_field"] = max10 == max5
test10["lastQuery"] = that.Db.LastQuery
tests = append(tests, test10)
// 5.11 Min - table.column format
test11 := Map{"name": "Min 最小值 (table.column 格式)"}
min11 := that.Db.Min("article", "article.sort", Map{"state": 0})
test11["result"] = true
test11["min"] = min11
test11["match_single_field"] = min11 == min6
test11["lastQuery"] = that.Db.LastQuery
tests = append(tests, test11)
// ==================== Aggregate functions with JOIN ====================
// 5.12 Sum with JOIN - table.column format
test12 := Map{"name": "Sum 带 JOIN (table.column 格式)"}
joinSlice := Slice{
Map{"[>]ctg": "article.ctg_id = ctg.id"},
}
sum12 := that.Db.Sum("article", "article.click_num", joinSlice, Map{"article.state": 0})
test12["result"] = sum12 >= 0
test12["sum"] = sum12
test12["lastQuery"] = that.Db.LastQuery
tests = append(tests, test12)
// 5.13 Count with JOIN
test13 := Map{"name": "Count 带 JOIN"}
count13 := that.Db.Count("article", joinSlice, Map{"article.state": 0})
test13["result"] = count13 >= 0
test13["count"] = count13
test13["lastQuery"] = that.Db.LastQuery
tests = append(tests, test13)
// ==================== Select method with table.column format ====================
// 5.14 Select with table.column field selection
test14 := Map{"name": "Select table.column 字段选择"}
articles14 := that.Db.Select("article",
"article.id, article.title, article.click_num",
Map{"article.state": 0, "LIMIT": 3})
test14["result"] = len(articles14) >= 0
test14["count"] = len(articles14)
// Verify the field names in the returned Map are correct (no backticks)
if len(articles14) > 0 {
keys := []string{}
for k := range articles14[0] {
keys = append(keys, k)
}
test14["returned_keys"] = keys
// Check whether values can be read correctly (is the field name article.id or id?)
test14["sample_data"] = articles14[0]
}
test14["lastQuery"] = that.Db.LastQuery
tests = append(tests, test14)
// 5.15 Select with JOIN using table.column
test15 := Map{"name": "Select 带 JOIN table.column"}
articles15 := that.Db.Select("article",
joinSlice,
"article.id, article.title, ctg.name as ctg_name",
Map{"article.state": 0, "LIMIT": 3})
test15["result"] = len(articles15) >= 0
test15["count"] = len(articles15)
if len(articles15) > 0 {
keys := []string{}
for k := range articles15[0] {
keys = append(keys, k)
}
test15["returned_keys"] = keys
test15["sample_data"] = articles15[0]
}
test15["lastQuery"] = that.Db.LastQuery
tests = append(tests, test15)
// ==================== Aggregate result consistency check ====================
// 5.16 Verify table.column results match single-column results
test16 := Map{"name": "聚合函数一致性验证"}
allMatch := (sum8 == sum3) && (avg9 == avg4) && (max10 == max5) && (min11 == min6)
test16["result"] = allMatch
test16["sum_match"] = sum8 == sum3
test16["avg_match"] = avg9 == avg4
test16["max_match"] = max10 == max5
test16["min_match"] = min11 == min6
test16["summary"] = fmt.Sprintf("Sum: %v=%v, Avg: %v=%v, Max: %v=%v, Min: %v=%v",
sum3, sum8, avg4, avg9, max5, max10, min6, min11)
tests = append(tests, test16)
result["tests"] = tests
result["success"] = true
return result
@ -903,3 +1033,394 @@ func testRawSQL(that *Context) Map {
result["success"] = true
return result
}
// ==================== Cache tests ====================
func testCacheAll(that *Context) Map {
result := Map{"name": "数据库缓存测试", "tests": Slice{}}
tests := Slice{}
// Get the current cache mode
cacheMode := "unknown"
if that.Application.HoTimeCache != nil && that.Application.HoTimeCache.Config != nil {
dbConfig := that.Application.HoTimeCache.Config.GetMap("db")
if dbConfig != nil {
cacheMode = dbConfig.GetString("mode")
if cacheMode == "" {
cacheMode = "new"
}
}
}
result["cache_mode"] = cacheMode
// Unique prefix for this test run
testPrefix := fmt.Sprintf("cache_test_%d_", time.Now().UnixNano())
// ==================== 1. Basic set/get test ====================
test1 := Map{"name": "1. 基础 set/get 测试"}
testKey1 := testPrefix + "basic"
testValue1 := Map{"name": "测试数据", "count": 123, "active": true}
// Set the cache entry
that.Application.Cache(testKey1, testValue1)
// Read it back
cached1 := that.Application.Cache(testKey1)
if cached1.Data != nil {
cachedMap := cached1.ToMap()
test1["result"] = cachedMap.GetString("name") == "测试数据" && cachedMap.GetInt("count") == 123
test1["cached_value"] = cachedMap
} else {
test1["result"] = false
test1["error"] = "缓存读取返回 nil"
}
tests = append(tests, test1)
// ==================== 2. Delete test ====================
test2 := Map{"name": "2. delete 删除缓存测试"}
testKey2 := testPrefix + "delete"
that.Application.Cache(testKey2, "删除测试值")
// Verify it exists
before := that.Application.Cache(testKey2)
beforeExists := before.Data != nil
// Delete
that.Application.Cache(testKey2, nil)
// Verify it is gone
after := that.Application.Cache(testKey2)
afterExists := after.Data != nil
test2["result"] = beforeExists && !afterExists
test2["before_exists"] = beforeExists
test2["after_exists"] = afterExists
tests = append(tests, test2)
// ==================== 3. Expiration test (short timeout) ====================
test3 := Map{"name": "3. 过期时间测试(短超时)"}
testKey3 := testPrefix + "expire"
// Set a 2-second expiration
that.Application.Cache(testKey3, "短期数据", 2)
// An immediate read should still succeed
immediate := that.Application.Cache(testKey3)
immediateExists := immediate.Data != nil
test3["result"] = immediateExists
test3["immediate_exists"] = immediateExists
test3["note"] = "设置了2秒过期可等待后再次访问验证过期"
tests = append(tests, test3)
// ==================== 4. Reading a non-existent key ====================
test4 := Map{"name": "4. 不存在的 key 读取测试"}
nonExistKey := testPrefix + "non_exist_key_" + fmt.Sprintf("%d", time.Now().UnixNano())
nonExist := that.Application.Cache(nonExistKey)
test4["result"] = nonExist.Data == nil
test4["value"] = nonExist.Data
tests = append(tests, test4)
// ==================== 5. Repeatedly setting the same key ====================
test5 := Map{"name": "5. 重复 set 同一个 key 测试"}
testKey5 := testPrefix + "repeat"
that.Application.Cache(testKey5, "第一次值")
first := that.Application.Cache(testKey5).ToStr()
that.Application.Cache(testKey5, "第二次值")
second := that.Application.Cache(testKey5).ToStr()
that.Application.Cache(testKey5, Map{"version": 3})
third := that.Application.Cache(testKey5).ToMap()
test5["result"] = first == "第一次值" && second == "第二次值" && third.GetInt("version") == 3
test5["first"] = first
test5["second"] = second
test5["third"] = third
tests = append(tests, test5)
// ==================== 6. Wildcard delete test (key*) ====================
test6 := Map{"name": "6. 通配删除测试 (key*)"}
wildcardPrefix := testPrefix + "wildcard_"
// Create several cache entries sharing a prefix
that.Application.Cache(wildcardPrefix+"a", "值A")
that.Application.Cache(wildcardPrefix+"b", "值B")
that.Application.Cache(wildcardPrefix+"c", "值C")
// Verify all exist
aExists := that.Application.Cache(wildcardPrefix+"a").Data != nil
bExists := that.Application.Cache(wildcardPrefix+"b").Data != nil
cExists := that.Application.Cache(wildcardPrefix+"c").Data != nil
allExistBefore := aExists && bExists && cExists
// Wildcard delete
that.Application.Cache(wildcardPrefix+"*", nil)
// Verify all are gone
aAfter := that.Application.Cache(wildcardPrefix+"a").Data != nil
bAfter := that.Application.Cache(wildcardPrefix+"b").Data != nil
cAfter := that.Application.Cache(wildcardPrefix+"c").Data != nil
allDeletedAfter := !aAfter && !bAfter && !cAfter
test6["result"] = allExistBefore && allDeletedAfter
test6["before"] = Map{"a": aExists, "b": bExists, "c": cExists}
test6["after"] = Map{"a": aAfter, "b": bAfter, "c": cAfter}
tests = append(tests, test6)
// ==================== 7. Storing different data types ====================
test7 := Map{"name": "7. 不同数据类型存储测试"}
// String
that.Application.Cache(testPrefix+"type_string", "字符串值")
typeString := that.Application.Cache(testPrefix + "type_string").ToStr()
// Integer
that.Application.Cache(testPrefix+"type_int", 12345)
typeInt := that.Application.Cache(testPrefix + "type_int").ToInt()
// Float
that.Application.Cache(testPrefix+"type_float", 3.14159)
typeFloat := that.Application.Cache(testPrefix + "type_float").ToFloat64()
// Boolean
that.Application.Cache(testPrefix+"type_bool", true)
typeBoolData := that.Application.Cache(testPrefix + "type_bool").Data
typeBool := typeBoolData == true || typeBoolData == "true" || typeBoolData == 1.0
// Map
that.Application.Cache(testPrefix+"type_map", Map{"key": "value", "num": 100})
typeMap := that.Application.Cache(testPrefix + "type_map").ToMap()
// Slice
that.Application.Cache(testPrefix+"type_slice", Slice{1, 2, 3, "four", Map{"five": 5}})
typeSlice := that.Application.Cache(testPrefix + "type_slice").ToSlice()
test7["result"] = typeString == "字符串值" &&
typeInt == 12345 &&
typeFloat > 3.14 && typeFloat < 3.15 &&
typeBool == true &&
typeMap.GetString("key") == "value" &&
len(typeSlice) == 5
test7["string"] = typeString
test7["int"] = typeInt
test7["float"] = typeFloat
test7["bool"] = typeBool
test7["map"] = typeMap
test7["slice"] = typeSlice
tests = append(tests, test7)
// ==================== 8. Custom timeout parameter ====================
test8 := Map{"name": "8. 自定义超时时间参数测试"}
testKey8 := testPrefix + "custom_timeout"
// Set a 3600-second (1 hour) expiration
that.Application.Cache(testKey8, "长期数据", 3600)
longTerm := that.Application.Cache(testKey8)
test8["result"] = longTerm.Data != nil
test8["value"] = longTerm.ToStr()
tests = append(tests, test8)
// ==================== 9. Cache table status query ====================
test9 := Map{"name": "9. 缓存表状态查询"}
// Count rows in the new table
prefix := that.Db.GetPrefix()
newTableName := prefix + "hotime_cache"
legacyTableName := prefix + "cached"
newCount := that.Db.Count(newTableName)
test9["new_table_count"] = newCount
test9["new_table_name"] = newTableName
// Try querying the legacy table
legacyCount := int64(-1)
legacyExists := false
legacyData := that.Db.Query("SELECT COUNT(*) as cnt FROM `" + legacyTableName + "`")
if len(legacyData) > 0 {
legacyExists = true
legacyCount = legacyData[0].GetInt64("cnt")
}
test9["legacy_table_exists"] = legacyExists
test9["legacy_table_count"] = legacyCount
test9["legacy_table_name"] = legacyTableName
test9["result"] = newCount >= 0
tests = append(tests, test9)
// ==================== Clean up test data ====================
// Delete all cache entries created by this test
that.Application.Cache(testPrefix+"*", nil)
result["tests"] = tests
result["success"] = true
result["cleanup"] = "已清理所有测试缓存数据"
return result
}
// testCacheCompatible is a white-box test dedicated to compatible mode
func testCacheCompatible(that *Context) Map {
result := Map{
"test_name": "兼容模式白盒测试",
"timestamp": time.Now().Format("2006-01-02 15:04:05"),
}
prefix := that.Db.GetPrefix()
newTableName := prefix + "hotime_cache"
legacyTableName := prefix + "cached"
tests := Slice{}
// ==================== 1. Query the current cache mode ====================
test1 := Map{"name": "1. 查询当前缓存模式"}
// Read the config to confirm the mode
cacheConfig := that.Application.Config.GetMap("cache")
dbConfig := cacheConfig.GetMap("db")
mode := dbConfig.GetString("mode")
if mode == "" {
mode = "默认(compatible)"
}
test1["mode"] = mode
test1["result"] = true
tests = append(tests, test1)
// ==================== 2. Query existing legacy-table (cached) data ====================
test2 := Map{"name": "2. 查询老表cached现有数据"}
legacyData := that.Db.Query("SELECT * FROM `" + legacyTableName + "` LIMIT 5")
test2["legacy_table"] = legacyTableName
test2["count"] = len(legacyData)
test2["data"] = legacyData
test2["result"] = true
tests = append(tests, test2)
// ==================== 3. Query existing new-table (hotime_cache) data ====================
test3 := Map{"name": "3. 查询新表hotime_cache现有数据"}
newData := that.Db.Query("SELECT * FROM `" + newTableName + "` LIMIT 5")
test3["new_table"] = newTableName
test3["count"] = len(newData)
test3["data"] = newData
test3["result"] = true
tests = append(tests, test3)
// ==================== 4. Compatible-mode legacy fallback read ====================
test4 := Map{"name": "4. 测试兼容模式老表回退读取"}
// Insert a non-expired row into the legacy table for the test
testKey4 := "test_compat_fallback_" + ObjToStr(time.Now().UnixNano())
testValue4 := Map{"admin_id": 999, "admin_name": "测试老数据"}
testValueJson4, _ := json.Marshal(Map{"data": testValue4})
// Insert non-expired data into the legacy table
that.Db.Insert(legacyTableName, Map{
"key": testKey4,
"value": string(testValueJson4),
"endtime": time.Now().Unix() + 3600, // 1小时后过期
"time": time.Now().UnixNano(),
})
test4["test_key"] = testKey4
// Ensure the new table does not have this key
newExists := that.Db.Get(newTableName, "*", Map{"key": testKey4})
test4["key_in_new_table"] = newExists != nil
// 通过缓存 API 读取(应该回退到老表)
cacheValue := that.Application.Cache(testKey4)
test4["cache_api_result"] = cacheValue.Data
// 直接从老表读取确认
legacyValue := that.Db.Get(legacyTableName, "*", Map{"key": testKey4})
if legacyValue != nil {
test4["legacy_db_value"] = legacyValue.GetString("value")
test4["legacy_db_endtime"] = legacyValue.GetInt64("endtime")
test4["legacy_db_endtime_readable"] = time.Unix(legacyValue.GetInt64("endtime"), 0).Format("2006-01-02 15:04:05")
}
// 验证新表没数据但缓存API能读到老表数据
test4["result"] = newExists == nil && cacheValue.Data != nil
tests = append(tests, test4)
// ==================== 5. Test write-new-delete-old ====================
test5 := Map{"name": "5. Test compatible-mode write-new-delete-old"}
testKey5 := "test_compat_write_" + ObjToStr(time.Now().UnixNano())
testValue5 := "compatible-mode test data"
// First insert a row into the legacy table
that.Db.Insert(legacyTableName, Map{
"key": testKey5,
"value": `{"data":"original legacy data"}`,
"endtime": time.Now().Unix() + 3600, // expires in 1 hour
"time": time.Now().UnixNano(),
})
// Confirm the legacy table has the row
legacyBefore := that.Db.Get(legacyTableName, "*", Map{"key": testKey5})
test5["step1_legacy_before"] = legacyBefore != nil
// Write through the cache API (should write the new table and delete from the legacy table)
that.Application.Cache(testKey5, testValue5)
// Check the new table
newAfter := that.Db.Get(newTableName, "*", Map{"key": testKey5})
test5["step2_new_after"] = newAfter != nil
if newAfter != nil {
test5["new_value"] = newAfter.GetString("value")
}
// Check the legacy table (the row should be gone)
legacyAfter := that.Db.Get(legacyTableName, "*", Map{"key": testKey5})
test5["step3_legacy_after_deleted"] = legacyAfter == nil
test5["result"] = legacyBefore != nil && newAfter != nil && legacyAfter == nil
tests = append(tests, test5)
// ==================== 6. Test delete removing rows from both tables ====================
test6 := Map{"name": "6. Test compatible-mode delete (removes rows from both tables)"}
testKey6 := "test_compat_delete_" + ObjToStr(time.Now().UnixNano())
// Insert into the legacy table
that.Db.Insert(legacyTableName, Map{
"key": testKey6,
"value": `{"data":"legacy data to delete"}`,
"endtime": time.Now().Unix() + 3600,
"time": time.Now().UnixNano(),
})
// Insert into the new table
that.Db.Insert(newTableName, Map{
"key": testKey6,
"value": `"new data to delete"`,
"end_time": time.Now().Add(time.Hour).Format("2006-01-02 15:04:05"),
"state": 0,
"create_time": time.Now().Format("2006-01-02 15:04:05"),
"modify_time": time.Now().Format("2006-01-02 15:04:05"),
})
// Confirm both tables have the row
test6["before_legacy"] = that.Db.Get(legacyTableName, "*", Map{"key": testKey6}) != nil
test6["before_new"] = that.Db.Get(newTableName, "*", Map{"key": testKey6}) != nil
// Delete through the cache API
that.Application.Cache(testKey6, nil)
// Confirm the row is gone from both tables
test6["after_legacy_deleted"] = that.Db.Get(legacyTableName, "*", Map{"key": testKey6}) == nil
test6["after_new_deleted"] = that.Db.Get(newTableName, "*", Map{"key": testKey6}) == nil
test6["result"] = test6.GetBool("before_legacy") && test6.GetBool("before_new") &&
test6.GetBool("after_legacy_deleted") && test6.GetBool("after_new_deleted")
tests = append(tests, test6)
// ==================== 7. Clean up test data ====================
that.Db.Delete(newTableName, Map{"key[~]": "test_compat_%"})
that.Db.Delete(legacyTableName, Map{"key[~]": "test_compat_%"})
result["tests"] = tests
result["success"] = true
return result
}
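The semantics tests 4 through 6 exercise can be condensed into a few lines. The following is a minimal sketch with in-memory maps standing in for the `hotime_cache` and `cached` tables; `compatGet`, `compatSet`, and `compatDelete` are illustrative names, not CacheDb methods:

```go
package main

import "fmt"

// row models one cache record; expired rows count as absent.
type row struct {
	value   string
	expired bool
}

// compatGet reads the new table first and, when it has no live row,
// falls back to the legacy table (the behavior test 4 checks).
func compatGet(newTbl, legacyTbl map[string]row, key string) (string, bool) {
	if r, ok := newTbl[key]; ok && !r.expired {
		return r.value, true
	}
	if r, ok := legacyTbl[key]; ok && !r.expired {
		return r.value, true
	}
	return "", false
}

// compatSet writes the new table and deletes the same key from the
// legacy table, so legacy rows die out as keys are rewritten (test 5).
func compatSet(newTbl, legacyTbl map[string]row, key, value string) {
	newTbl[key] = row{value: value}
	delete(legacyTbl, key)
}

// compatDelete removes the key from both tables (test 6).
func compatDelete(newTbl, legacyTbl map[string]row, key string) {
	delete(newTbl, key)
	delete(legacyTbl, key)
}

func main() {
	newTbl := map[string]row{}
	legacyTbl := map[string]row{"k1": {value: "old"}}

	v, _ := compatGet(newTbl, legacyTbl, "k1")
	fmt.Println(v) // old: the read fell back to the legacy table

	compatSet(newTbl, legacyTbl, "k1", "new")
	v, _ = compatGet(newTbl, legacyTbl, "k1")
	fmt.Println(v) // new: the legacy row was deleted on write

	compatDelete(newTbl, legacyTbl, "k1")
	_, ok := compatGet(newTbl, legacyTbl, "k1")
	fmt.Println(ok) // false: both tables are clear
}
```

In the real implementation the fallback additionally has to bridge the two schemas (unix `endtime` in the legacy table versus formatted `end_time` in the new one), which this sketch deliberately leaves out.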

View File

@@ -95,7 +95,7 @@ func isHoTimeFrameworkFile(file string) bool {
 	lowerFile := strings.ToLower(file)
 	if strings.Contains(lowerFile, "hotime") {
 		// Part of the hotime framework: check whether it is a core module
-		frameworkDirs := []string{"/db/", "/common/", "/code/", "/cache/", "/log/", "/dri/"}
+		frameworkDirs := []string{"db/", "common/", "code/", "cache/", "log/", "dri/"}
 		for _, dir := range frameworkDirs {
 			if strings.Contains(file, dir) {
 				return true
@@ -141,8 +141,13 @@ func isHoTimeFrameworkFile(file string) bool {
 // Query the caller recursively until the first call not produced by the framework layer is found.
 // Walk the call stack, skipping framework-layer files, to find application-layer code
 // A depth limit makes sure same-named application-layer directories are not filtered out by mistake
+// Return priority: application-layer code > application.go > other framework files
 func findCaller(skip int) string {
 	frameworkCount := 0 // consecutive framework-layer frame count
+	var lastFrameworkFile string
+	var lastFrameworkLine int
+	var applicationFile string // preferred record of the application.go position
+	var applicationLine int
 	// Walk the call stack and find the first non-framework file
 	for i := 0; i < 20; i++ {
@@ -151,8 +156,17 @@ func findCaller(skip int) string {
 			break
 		}
-		if isHoTimeFrameworkFile(file) {
+		isFramework := isHoTimeFrameworkFile(file)
+		if isFramework {
 			frameworkCount++
+			lastFrameworkFile = file
+			lastFrameworkLine = line
+			// Record the application.go position preferentially (the HoTime framework entry point)
+			if strings.Contains(file, "application.go") {
+				applicationFile = file
+				applicationLine = line
+			}
 			// Depth limit: once too many frames have been skipped, stop skipping
 			if frameworkCount >= maxFrameworkDepth {
 				return fmt.Sprintf("%s:%d", file, line)
@@ -164,7 +178,18 @@ func findCaller(skip int) string {
 			return fmt.Sprintf("%s:%d", file, line)
 		}
-	// If no application layer is found, return the original caller
+	// If no application layer is found, return the last recorded framework file position
+	// Priority: application.go > other framework files > the first caller
+	// This guarantees a third-party location such as logrus or runtime is never returned
+	if applicationFile != "" {
+		return fmt.Sprintf("%s:%d", applicationFile, applicationLine)
+	}
+	if lastFrameworkFile != "" {
+		return fmt.Sprintf("%s:%d", lastFrameworkFile, lastFrameworkLine)
+	}
+	// Final fallback: return the first caller
 	file, line := getCaller(skip)
 	return fmt.Sprintf("%s:%d", file, line)
 }
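Stripped of the runtime plumbing, the fallback order this hunk introduces is a small selection rule over a list of frames. A sketch under assumed inputs; `pickCaller` and the frame strings are hypothetical, not the framework's API:

```go
package main

import (
	"fmt"
	"strings"
)

// pickCaller mirrors the priority the diff gives findCaller:
// application-layer frame > application.go > last framework frame.
func pickCaller(frames []string, isFramework func(string) bool) string {
	var appGo, lastFramework string
	for _, f := range frames {
		if !isFramework(f) {
			return f // the first non-framework frame wins outright
		}
		lastFramework = f
		if strings.Contains(f, "application.go") {
			appGo = f // remember the framework entry point
		}
	}
	if appGo != "" {
		return appGo
	}
	// Never a third-party frame: every recorded candidate was one of ours.
	return lastFramework
}

func main() {
	isFw := func(f string) bool { return strings.Contains(f, "hotime/") }
	// No application-layer frame: application.go beats db/db.go.
	fmt.Println(pickCaller([]string{"hotime/db/db.go:10", "hotime/application.go:55"}, isFw))
	// An application-layer frame beats everything.
	fmt.Println(pickCaller([]string{"hotime/log/log.go:3", "app/main.go:12"}, isFw))
}
```

The real findCaller additionally stops early at `maxFrameworkDepth`, which this sketch omits.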

View File

@@ -1,9 +1,13 @@
 package hotime
 import (
+	"encoding/json"
+	"os"
+	"sync"
+	"time"
 	. "code.hoteas.com/golang/hotime/cache"
 	. "code.hoteas.com/golang/hotime/common"
-	"sync"
 )
 // Session object
@@ -17,6 +21,15 @@ type SessionIns struct {
 // set saves the session to the cache; must be called inside the lock or be passed a deep-copied map
 func (that *SessionIns) setWithCopy() {
+	// #region agent log
+	logFile, _ := os.OpenFile(`d:\work\hotimev1.5\.cursor\debug.log`, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if logFile != nil {
+		logEntry, _ := json.Marshal(map[string]interface{}{"sessionId": "debug-session", "runId": "run1", "hypothesisId": "A", "location": "session.go:setWithCopy", "message": "session write to database triggered", "data": map[string]interface{}{"session_id": that.SessionId, "map_size": len(that.Map)}, "timestamp": time.Now().UnixMilli()})
+		logFile.Write(append(logEntry, '\n'))
+		logFile.Close()
+	}
+	// #endregion
 	// Deep-copy the Map to guard against concurrent modification
 	that.mutex.RLock()
 	copyMap := make(Map, len(that.Map))
@@ -36,6 +49,15 @@ func (that *SessionIns) Session(key string, data ...interface{}) *Obj {
 	that.mutex.Unlock()
 	if len(data) != 0 {
+		// #region agent log
+		logFile, _ := os.OpenFile(`d:\work\hotimev1.5\.cursor\debug.log`, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+		if logFile != nil {
+			logEntry, _ := json.Marshal(map[string]interface{}{"sessionId": "debug-session", "runId": "run1", "hypothesisId": "B", "location": "session.go:Session", "message": "Session.Set called", "data": map[string]interface{}{"key": key, "is_delete": data[0] == nil}, "timestamp": time.Now().UnixMilli()})
+			logFile.Write(append(logEntry, '\n'))
+			logFile.Close()
+		}
+		// #endregion
 		that.mutex.Lock()
 		if data[0] == nil {
 			delete(that.Map, key)
@@ -55,6 +77,103 @@ func (that *SessionIns) Session(key string, data ...interface{}) *Obj {
 	return result
 }
+// SessionsSet sets multiple session fields in one batch, triggering only one database write
+// Usage: that.SessionsSet(Map{"key1": value1, "key2": value2, ...})
+// Performance: setting N fields triggers 1 database write instead of N
+func (that *SessionIns) SessionsSet(data Map) {
+	if len(data) == 0 {
+		return
+	}
+	// #region agent log
+	logFile, _ := os.OpenFile(`d:\work\hotimev1.5\.cursor\debug.log`, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if logFile != nil {
+		keys := make([]string, 0, len(data))
+		for k := range data {
+			keys = append(keys, k)
+		}
+		logEntry, _ := json.Marshal(map[string]interface{}{"sessionId": "debug-session", "runId": "run1", "hypothesisId": "C", "location": "session.go:SessionsSet", "message": "SessionsSet batch set", "data": map[string]interface{}{"keys": keys, "count": len(data)}, "timestamp": time.Now().UnixMilli()})
+		logFile.Write(append(logEntry, '\n'))
+		logFile.Close()
+	}
+	// #endregion
+	that.mutex.Lock()
+	if that.Map == nil {
+		that.getWithoutLock()
+	}
+	// Set all fields in one batch
+	for key, value := range data {
+		if value == nil {
+			delete(that.Map, key)
+		} else {
+			that.Map[key] = value
+		}
+	}
+	that.mutex.Unlock()
+	// Trigger the database write only once
+	that.setWithCopy()
+}
+// SessionsDelete deletes multiple session fields in one batch, triggering only one database write
+// Usage: that.SessionsDelete("key1", "key2", ...)
+func (that *SessionIns) SessionsDelete(keys ...string) {
+	if len(keys) == 0 {
+		return
+	}
+	// #region agent log
+	logFile, _ := os.OpenFile(`d:\work\hotimev1.5\.cursor\debug.log`, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if logFile != nil {
+		logEntry, _ := json.Marshal(map[string]interface{}{"sessionId": "debug-session", "runId": "run1", "hypothesisId": "C", "location": "session.go:SessionsDelete", "message": "SessionsDelete batch delete", "data": map[string]interface{}{"keys": keys, "count": len(keys)}, "timestamp": time.Now().UnixMilli()})
+		logFile.Write(append(logEntry, '\n'))
+		logFile.Close()
+	}
+	// #endregion
+	that.mutex.Lock()
+	if that.Map == nil {
+		that.getWithoutLock()
+	}
+	// Delete all fields in one batch
+	for _, key := range keys {
+		delete(that.Map, key)
+	}
+	that.mutex.Unlock()
+	// Trigger the database write only once
+	that.setWithCopy()
+}
+// SessionsGet gets multiple session fields in one batch
+// Usage: result := that.SessionsGet("key1", "key2", ...)
+// Returns a Map keyed by field name (keys that do not exist are absent from the result)
+func (that *SessionIns) SessionsGet(keys ...string) Map {
+	if len(keys) == 0 {
+		return Map{}
+	}
+	that.mutex.Lock()
+	if that.Map == nil {
+		that.getWithoutLock()
+	}
+	that.mutex.Unlock()
+	result := make(Map, len(keys))
+	that.mutex.RLock()
+	for _, key := range keys {
+		if value, exists := that.Map[key]; exists {
+			result[key] = value
+		}
+	}
+	that.mutex.RUnlock()
+	return result
+}
 // getWithoutLock is for internal use; the caller must already hold the lock
 func (that *SessionIns) getWithoutLock() {
 	that.Map = that.HoTimeCache.Session(HEAD_SESSION_ADD + that.SessionId).ToMap()
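The point of SessionsSet over repeated per-key Session calls is write amplification: every single-field set flushes the whole map to the database, while the batch variant flushes once. A self-contained sketch with a toy session type and a flush counter (not the real SessionIns, whose flush is setWithCopy):

```go
package main

import "fmt"

// session stands in for the diff's SessionIns: every mutation
// normally flushes the whole map to the database once.
type session struct {
	data   map[string]any
	writes int // counts simulated database flushes
}

func (s *session) flush() { s.writes++ }

// Set mirrors Session(key, value): one flush per call.
func (s *session) Set(key string, value any) {
	s.data[key] = value
	s.flush()
}

// SetBatch mirrors SessionsSet: apply every field, then flush once.
// A nil value deletes the key, as in the real API.
func (s *session) SetBatch(fields map[string]any) {
	for k, v := range fields {
		if v == nil {
			delete(s.data, k)
		} else {
			s.data[k] = v
		}
	}
	s.flush()
}

func main() {
	a := &session{data: map[string]any{}}
	a.Set("admin_id", 1)
	a.Set("admin_name", "alice")
	a.Set("role_id", 2)
	fmt.Println("per-key writes:", a.writes) // 3

	b := &session{data: map[string]any{}}
	b.SetBatch(map[string]any{"admin_id": 1, "admin_name": "alice", "role_id": 2})
	fmt.Println("batch writes:", b.writes) // 1
}
```

The same one-flush shape applies to SessionsDelete; SessionsGet never flushes at all, it only loads the map on first access.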

var.go
View File

@@ -108,6 +108,7 @@ var ConfigNote = Map{
 		"db": "default false, optional; cache database, reduces read/write pressure on the database when enabled",
 		"session": "default true, optional; caches web sessions together with the per-user cache the session keeps alive",
 		"history": "default false, optional; whether to record cache history; when enabled every cache insert/update is written to a history table, and once created the history table is never deleted automatically",
+		"mode": "default compatible, optional; cache table mode. compatible: write to the new table, fall back to reading the legacy table when the new table has no data, delete the legacy row with the same key on write, delete from both tables on delete, and let legacy data expire away naturally; new: use only the new table hotime_cache, migrate legacy table (cached) data automatically, and keep the legacy table for manual removal",
 	},
 	"redis": Map{
 		"host": "default service ip 127.0.0.1; required when the redis service is used",
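Read together with the white-box test at the top of this compare, which resolves the option via `Config.GetMap("cache").GetMap("db").GetString("mode")`, the new setting would sit under `cache.db` in the configuration file. A sketch only; the file name and any sibling keys are assumptions, not confirmed by this diff:

```json
{
  "cache": {
    "db": {
      "mode": "compatible"
    }
  }
}
```

Leaving `mode` out entirely behaves the same as `"compatible"`, per the default stated in the ConfigNote entry above.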