Edge-LLM Inference With Cost-Aware Layer Allocation and Adaptive Scheduling