Phonologies (India)

3 Job openings at Phonologies (India)
Lead Data Engineer & Architect | Pune, Maharashtra, India | 10 years | Not disclosed | On-site | Full Time

Company Description
Phonologies manages telephony infrastructure for contact center applications and chatbots. Our platform helps businesses answer millions of customer support queries using automated voicebots, improving customer interactions and operational efficiencies. Leading pharmacy chains, Fortune 500 companies, and North America's largest carrier rely on our solutions to reduce the burden on live agents and lower costs. Phonologies is based in India and operates globally.

Role Description
This is a full-time on-site role for a Lead Data Engineer & Architect, located in Pune. The Lead Data Engineer & Architect will be responsible for designing and implementing data architecture, developing and maintaining data pipelines, conducting data analysis, and collaborating with various teams to improve data-driven decision-making. The role also includes leading data engineering projects and ensuring data quality and security.

Qualifications
- 10+ years in enterprise data engineering and architecture
- Expert in ETL, orchestration, and streaming pipelines
- Skilled in Hadoop, Spark, Azure, Kafka, Kubernetes
- Built MLOps and AutoML-ready production pipelines
- Delivered telecom, banking, and public sector solutions
- Leads cross-functional teams with strong client focus
- Certified in data platforms and AI leadership

Lead Data Engineer & Architect | Pune, Maharashtra | 10-14 years | Salary (INR) not disclosed | On-site | Full Time

You will be working as a Lead Data Engineer & Architect at Phonologies, a company that specializes in managing telephony infrastructure for contact center applications and chatbots. Phonologies' platform is utilized by leading pharmacy chains, Fortune 500 companies, and North America's largest carrier to automate voice-based customer support queries, enhancing customer interactions and operational efficiencies. The company is headquartered in India and operates on a global scale.

As the Lead Data Engineer & Architect, your role will involve designing and implementing data architecture, creating and maintaining data pipelines, performing data analysis, and collaborating with different teams to enhance data-driven decision-making processes. You will also be tasked with leading data engineering projects, ensuring data quality and security, and leveraging your expertise to drive successful outcomes.

To be successful in this role, you should possess at least 10 years of experience in enterprise data engineering and architecture. You must be proficient in ETL processes, orchestration, and streaming pipelines, with a strong skill set in technologies such as Hadoop, Spark, Azure, Kafka, and Kubernetes. Additionally, you should have a track record of building MLOps and AutoML-ready production pipelines and delivering solutions for the telecom, banking, and public sector industries. Your ability to lead cross-functional teams with a focus on client satisfaction will be crucial, as will your certifications in data platforms and AI leadership.

If you are passionate about data engineering, architecture, and driving innovation in a dynamic environment, this role at Phonologies may be the perfect opportunity for you. Join our team in Pune and contribute to our mission of revolutionizing customer support through cutting-edge technology solutions.

Machine Learning Engineer | Pune, Maharashtra, India | 5 years | Not disclosed | On-site | Full Time

About the Role
Phonologies is seeking a hands-on ML Engineer who bridges data engineering and machine learning, designing and implementing production-ready ML pipelines that are reliable, scalable, and automation-driven. You'll own the end-to-end workflow: from data transformation and pipeline design to model packaging, fine-tuning preparation, and deployment automation. This is not a Data Scientist or MLOps role; it is a data-focused engineering position for someone who understands ML systems deeply, builds robust data workflows, and develops the platforms that power AI in production.

Role & responsibilities
Machine Learning Pipelines & Automation:
- Design, deploy, and maintain end-to-end ML pipelines.
- Build data transformation layers (Bronze, Silver, Gold) to enable structured, reusable workflows.
- Automate retraining, validation, and deployment using Airflow, Kubeflow, or equivalent tools.
- Implement Role-Based Access Control (RBAC) and enterprise-grade authorization.
LLM & GenAI Readiness:
- Prepare datasets for LLM fine-tuning: tokenization, formatting, and quality filtering.
- Support LangChain / RAG integration and automate embeddings preparation for GenAI applications.
API & Platform Architecture:
- Develop robust APIs for model serving, metadata access, and pipeline orchestration.
- Participate in platform design and architecture reviews, contributing to scalable ML system blueprints.
- Create monitoring and observability frameworks for performance and reliability.
Cloud & Deployment:
- Deploy pipelines across cloud (AWS, Azure, GCP) and on-prem environments, ensuring scalability and reproducibility.
- Collaborate with DevOps and platform teams to optimize compute, storage, and workflow orchestration.
Collaboration & Integration:
- Work with Data Scientists to productionize models, and with Data Engineers to optimize feature pipelines.
- Integrate Firebase workflows or data event triggers where required.
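The Bronze/Silver/Gold transformation layers mentioned in the responsibilities follow the common medallion pattern: land raw data untouched, then clean and validate it, then produce business-level aggregates. A minimal sketch of that layering, using pandas and invented call-record fields (`call_id`, `queue`, `duration_sec`) purely for illustration; the posting does not name a specific stack:

```python
# Minimal medallion-architecture sketch: Bronze (raw), Silver (cleaned),
# Gold (aggregated). pandas and the field names are illustrative assumptions.
import pandas as pd

def bronze_ingest(raw_records: list) -> pd.DataFrame:
    """Bronze: land raw data as-is, adding only ingestion metadata."""
    df = pd.DataFrame(raw_records)
    df["_ingested_at"] = pd.Timestamp.now(tz="UTC")
    return df

def silver_clean(bronze: pd.DataFrame) -> pd.DataFrame:
    """Silver: drop invalid rows, normalize types, deduplicate."""
    silver = bronze.dropna(subset=["call_id", "duration_sec"]).copy()
    silver["duration_sec"] = silver["duration_sec"].astype(float)
    return silver.drop_duplicates(subset=["call_id"])

def gold_aggregate(silver: pd.DataFrame) -> pd.DataFrame:
    """Gold: business-level aggregates ready for features or reporting."""
    return (silver.groupby("queue")["duration_sec"]
                  .agg(calls="count", avg_duration="mean")
                  .reset_index())

raw = [
    {"call_id": 1, "queue": "pharmacy", "duration_sec": 42},
    {"call_id": 1, "queue": "pharmacy", "duration_sec": 42},   # duplicate
    {"call_id": 2, "queue": "billing", "duration_sec": None},  # invalid
    {"call_id": 3, "queue": "pharmacy", "duration_sec": 30},
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)
```

In a production pipeline each layer would typically be a separate scheduled task (e.g. an Airflow DAG step) writing to persistent storage, so downstream consumers can reuse the Silver layer without re-ingesting raw data.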
Preferred candidate profile
Experience: 5+ years in Data Engineering or ML Engineering, with proven experience in:
- building data workflows and ML pipelines
- packaging and deploying models using Docker & CI/CD
- preparing data for LLM fine-tuning and generative AI pipelines
- designing platform and API architecture for ML systems

Technical Skills:
- Programming & ML: Python, SQL, scikit-learn, XGBoost, LightGBM
- Data Engineering & Cloud Pipelines: large-scale preprocessing, containerized ETL (Docker, Airflow, Kubernetes), workflow automation
- Data Streaming & Integration: Apache Kafka, micro-batch and real-time ingestion
- ML Lifecycle & Orchestration: MLflow, Dagshub, Databricks, A/B testing, modular ML system design
- API & Platform Development: FastAPI, Flask, RESTful APIs, architecture planning
- Data Governance, Privacy, Security & Access Control: schema registry, lineage tracking, secure data handling, audit logging, RBAC
- LLM & GenAI: fine-tuning prep, RAG, LangChain, LLMOps (LangSmith, vector DBs)
- AutoML & Optimization: PyCaret, H2O.ai, Google AutoML
- Model Monitoring & Automation: drift detection, retraining workflows, Airflow / Kubeflow automation
- Firebase & Tooling: Cloud Functions, Firestore models, Jenkins, Prefect, CI/CD automation

Education: Bachelor's or Master's in Computer Science, Machine Learning, or Information Systems.
Communication & Collaboration: translating technical concepts, business storytelling, cross-functional delivery.