2 Job openings at Calibo
About Calibo

Calibo is a creative technology company focused on providing innovative solutions for enhancing digital experiences through cutting-edge design and technology.

Data Architect

Pune

12 - 16 years

INR 30.0 - 45.0 Lacs P.A.

Remote

Full Time

About the Role:
We are looking for a highly skilled Data Engineering Architect with strong data engineering pipeline implementation experience to serve as the lead Solution/Technical Architect and Subject Matter Expert for customer experience data solutions across multiple data sources. The ideal candidate will collaborate with the Enterprise Architect and the client IT team to establish and implement strategic initiatives.

Responsibilities and Technical Skills:
- 12+ years of relevant experience designing and architecting ETL, ELT, Reverse ETL, Data Management/Data Integration, Data Warehouse, Data Lake, and Data Migration solutions.
- Expertise in building complex ETL pipelines and in large-scale data processing, data quality, and data security.
- Experience delivering quality work on time across multiple, competing priorities.
- Excellent troubleshooting and problem-solving skills; able to consistently identify critical elements, variables, and alternatives to develop solutions.
- Experience identifying, analyzing, and translating business requirements into conceptual, logical, and physical data models in complex, multi-application environments.
- Experience with Agile and Scaled Agile frameworks.
- Experience identifying and documenting data integration issues and challenges such as duplicate, non-conformed, and unclean data.
- Multiple-platform development experience.
- Strong experience in performance tuning of ETL processes on data platforms.
- Experience handling data formats such as Delta tables, Parquet files, and Iceberg.
- Experience with cloud technologies such as AWS, Azure, or Google Cloud.
- Apache Spark design and development experience using Scala, Java, or Python, with DataFrames and Resilient Distributed Datasets (RDDs).
- Development experience with databases such as Oracle, AWS Redshift, AWS RDS, Postgres, Databricks, and/or Snowflake.
- Hands-on professional work experience with Python is highly desired.
- Experience with Hadoop ecosystem tools for real-time or batch data ingestion.
- Strong communication and teamwork skills to interface with development team members, business analysts, and project management; excellent analytical skills.
- Identification of internal and external data sources, and definition of a data management plan in line with the business data strategy.
- Collaboration with cross-functional teams for the smooth functioning of the enterprise data system.
- Management of end-to-end data architecture: selecting the platform, designing the technical architecture, developing the application, and finally testing and implementing the proposed solution.
- Planning and execution of big data solutions using Databricks, Hadoop, BigQuery, Snowflake, MongoDB, DynamoDB, PostgreSQL, and SQL Server.
- Hands-on experience defining and implementing Machine Learning models for different business needs.
- Integration of technical functionality, ensuring data accessibility, accuracy, and security.
- Programming/scripting languages such as Python, Java, or Go; microservices.
- Machine Learning / AI tools such as scikit-learn, TensorFlow, or PyTorch.
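To illustrate the data-quality work this role calls out (identifying duplicate, non-conformed, and unclean data), here is a minimal, hypothetical Python sketch; the field names and validation rules are illustrative assumptions, not part of the posting.

```python
# Minimal sketch of batch data-quality profiling: split incoming
# records into clean, duplicate, and non-conformed buckets.
# Field names ("id", "email") and rules are hypothetical examples.

def profile_records(records, required_fields=("id", "email")):
    """Classify records for downstream data-quality reporting."""
    seen_ids = set()
    clean, duplicates, non_conformed = [], [], []
    for rec in records:
        # Non-conformed: a required field is missing or empty.
        if any(not rec.get(f) for f in required_fields):
            non_conformed.append(rec)
        # Duplicate: primary key already seen in this batch.
        elif rec["id"] in seen_ids:
            duplicates.append(rec)
        else:
            seen_ids.add(rec["id"])
            clean.append(rec)
    return clean, duplicates, non_conformed

batch = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "a@example.com"},  # duplicate id
    {"id": 2, "email": ""},               # unclean / non-conformed
    {"id": 3, "email": "c@example.com"},
]
clean, dupes, bad = profile_records(batch)
```

In a production pipeline the same classification would typically run at scale (e.g. in Spark), but the bucketing logic is the same idea.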

AI ML Architect

Pune

14 - 22 years

INR 45.0 - 75.0 Lacs P.A.

Remote

Full Time

About the Role:
Architecture and total solution design: from requirements analysis through design and engineering for data ingestion, pipelines, data preparation, and orchestration, to applying the right ML algorithms to the data stream and producing predictions.

Responsibilities:
- Define, design, and deliver ML architecture patterns operable in native and hybrid cloud architectures.
- Research, analyze, recommend, and select technical approaches to address challenging development and data integration problems related to ML model training and deployment in enterprise applications.
- Perform research activities to identify emerging technologies and trends that may affect Data Science/ML life-cycle management in the enterprise application portfolio.
- Implement the solution using AI orchestration.

Requirements:
- Hands-on programming and architecture capabilities in Python and Java.
- Minimum 6+ years of experience in enterprise application development (Java, .NET).
- Experience implementing and deploying Machine Learning solutions (using various models, such as Linear/Logistic Regression, Support Vector Machines, (Deep) Neural Networks, Hidden Markov Models, Conditional Random Fields, Topic Modeling, Game Theory, Mechanism Design, etc.).
- Experience building data pipelines, data cleaning, feature engineering, and feature stores.
- Experience with data platforms such as Databricks, Snowflake, and AWS/Azure/GCP cloud and data services.
- Strong hands-on experience with statistical packages and ML libraries (e.g. R, Python scikit-learn, Spark MLlib).
- Experience in effective data exploration and visualization (e.g. Excel, Power BI, Tableau, Qlik).
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.).
- Hands-on experience with RDBMS, NoSQL, and big data stores such as Elastic, Cassandra, HBase, Hive, and HDFS.
- Work experience in Solution Architect/Software Architect/Technical Lead roles.
- Experience with open-source software.
- Excellent problem-solving skills and the ability to break down complexity; able to see multiple solutions to a problem and choose the right one for the situation.
- Excellent written and oral communication skills.
- Demonstrated technical expertise architecting solutions around AI, ML, deep learning, and related technologies.
- Experience developing AI/ML models in real-world environments and integrating AI/ML into large-scale enterprise applications using cloud-native or hybrid technologies.
- In-depth experience with the AI/ML and data analytics services offered on Amazon Web Services and/or Microsoft Azure, and their interdependencies.
- Specialization in at least one AI/ML stack (frameworks and tools like MXNet and TensorFlow; ML platforms such as Amazon SageMaker for data scientists; API-driven AI services like Amazon Lex, Amazon Polly, Amazon Transcribe, Amazon Comprehend, and Amazon Rekognition to quickly add intelligence to applications with a simple API call).
- Demonstrated experience developing best practices and recommendations around tools/technologies for ML life-cycle capabilities such as data collection, data preparation, feature engineering, model management, MLOps, model deployment approaches, and model monitoring and tuning.
- Back end: LLM APIs and hosting, both proprietary and open-source solutions; cloud providers; ML infrastructure.
- Orchestration: workflow management such as LangChain, LlamaIndex, Hugging Face, Ollama.
- Data management: LLM cache.
- Monitoring: LLM Ops tools.
- Tools and techniques: prompt engineering, embedding models, vector DBs, validation frameworks, annotation tools, transfer learning, and others.
- Pipelines: Gen AI pipelines and implementation on cloud platforms (preference: Azure Databricks, Docker containers, Nginx, Jenkins).
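As a small illustration of the "LLM cache" item in the stack above, here is a hedged Python sketch of memoizing model responses keyed by model name and prompt; the `fake_llm` function is a hypothetical stand-in for a real LLM API call, not any specific provider's SDK.

```python
import hashlib

# Minimal sketch of an LLM response cache: memoize completions keyed
# by a hash of (model, prompt). fake_llm is a hypothetical stand-in
# for a real hosted or local model call.

class LLMCache:
    def __init__(self, llm_call):
        self._llm_call = llm_call
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt):
        # Hash model + prompt so the key is fixed-size and collision-resistant.
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def complete(self, model, prompt):
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        response = self._llm_call(model, prompt)
        self._store[key] = response
        return response

def fake_llm(model, prompt):
    # Stand-in: a real implementation would call an LLM API here.
    return f"[{model}] echo: {prompt}"

cache = LLMCache(fake_llm)
r1 = cache.complete("demo-model", "Summarize Q3 sales")
r2 = cache.complete("demo-model", "Summarize Q3 sales")  # served from cache
```

Production LLM caches add eviction, TTLs, and often semantic (embedding-based) matching, but exact-match memoization is the core idea.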

Calibo

Technology, Design, Digital Marketing

Tech City

50-100 Employees

2 Jobs

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
