⚡ Optimize memory usage with caching and transforms
Implement multiple memory optimization strategies to reduce heap allocations
and RSS memory usage during operator execution:
**OpenAPI Schema Caching:**
- Wrap the discovery client with memory.NewMemCacheClient to cache OpenAPI schemas (see the sketch after this list)
- Prevents redundant schema fetches from the API server
- Applied to both operator-controller and catalogd
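
For context, a minimal sketch of wrapping a discovery client with client-go's in-memory cache; the function name `NewCachedDiscovery` and the surrounding setup are illustrative, not the actual wiring in operator-controller or catalogd.

```go
package discoveryutil

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/rest"
)

// NewCachedDiscovery returns a discovery client whose API group and OpenAPI
// responses are cached in memory, so repeated schema lookups do not hit the
// API server again.
func NewCachedDiscovery(cfg *rest.Config) (discovery.CachedDiscoveryInterface, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return nil, err
	}
	// memory.NewMemCacheClient caches ServerGroups/ServerResources and the
	// OpenAPI schema; call Invalidate() to force a refresh when the set of
	// served APIs may have changed.
	return memory.NewMemCacheClient(dc), nil
}
```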
**Cache Transform Functions:**
- Strip managed fields from cached objects (these can add several KB per object)
- Remove large annotations such as kubectl.kubernetes.io/last-applied-configuration
- Shared transform function lives in internal/shared/util/cache/transform.go (see the sketch after this list)
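
The transform is a plain client-go `TransformFunc`. The sketch below shows one way such a function can strip managed fields and the last-applied-configuration annotation; the function name and the idea of wiring it through controller-runtime's `cache.Options.DefaultTransform` are assumptions here, the real code lives in internal/shared/util/cache/transform.go.

```go
package cacheutil

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	toolscache "k8s.io/client-go/tools/cache"
)

const lastAppliedAnnotation = "kubectl.kubernetes.io/last-applied-configuration"

// StripUnusedMetadata drops per-object metadata the controllers never read,
// shrinking every object stored in the informer cache.
func StripUnusedMetadata(obj interface{}) (interface{}, error) {
	accessor, ok := obj.(metav1.Object)
	if !ok {
		return nil, fmt.Errorf("unexpected object type %T", obj)
	}
	// Managed fields can be several KB per object and are not needed by
	// controllers that only read from the cache.
	accessor.SetManagedFields(nil)

	// The last-applied-configuration annotation duplicates the whole object
	// as a JSON string; drop it from the cached copy.
	if anns := accessor.GetAnnotations(); anns != nil {
		delete(anns, lastAppliedAnnotation)
		accessor.SetAnnotations(anns)
	}
	return obj, nil
}

// Compile-time check that the function satisfies the transform signature
// used by informers and by cache.Options.DefaultTransform.
var _ toolscache.TransformFunc = StripUnusedMetadata
```

In a controller-runtime based setup this would typically be passed as the cache's default transform when the manager is constructed, so every informer applies it before storing objects.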
**Memory Efficiency Improvements:**
- Pre-allocate slices with known capacity to avoid repeated growth reallocations (see the sketch after this list)
- Reduce unnecessary deep copies of large objects
- Optimize JSON deserialization paths
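
As a generic illustration of the pre-allocation pattern (not code from this repository, the helper is hypothetical):

```go
package example

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// objectNames collects names without intermediate reallocations: when the
// result length is known up front, allocate the full capacity once instead
// of letting append grow the backing array repeatedly.
func objectNames(objs []unstructured.Unstructured) []string {
	names := make([]string, 0, len(objs)) // single allocation, known capacity
	for i := range objs {
		names = append(names, objs[i].GetName())
	}
	return names
}
```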
**Impact:**
These optimizations significantly reduce memory overhead, especially in
large-scale deployments with many resources. OpenAPI schema caching alone
cuts allocations by roughly 73% (from 13MB to 3.5MB, according to profiling data).
See MEMORY_ANALYSIS.md for detailed breakdown of memory usage patterns.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>