0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hi All, we are hiring Data Engineers; please see the required skill sets below.
Mandatory Skills: GCP (especially BigQuery and Dataproc), Big Data technologies (Hadoop, Hive), Python / PySpark, Airflow and DAG orchestration.
Preferred Skills: experience with visualization tools such as Tableau or Power BI; familiarity with Jethro is a plus.
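For context on the kind of Airflow and DAG orchestration this posting mentions, here is a minimal sketch of a daily DAG that runs one BigQuery transformation; the project, dataset, and table names are hypothetical placeholders, not details from the posting.

```python
# Minimal illustrative Airflow DAG: one daily BigQuery transformation step.
# Project, dataset, and table names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_rollup",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_orders",
        configuration={
            "query": {
                "query": """
                    SELECT order_date, COUNT(*) AS orders
                    FROM `my-project.raw.orders`   -- placeholder source table
                    GROUP BY order_date
                """,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "analytics",
                    "tableId": "orders_daily",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )
```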
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior Data Analyst - Project Management Location: Bengaluru, Karnataka, India Experience : 2-3 Years About the Company & Role : We are one of India’s premier integrated political consulting firms specializing in building data-driven 360-degree election campaigns. We help our clients with strategic advice and implementation which brings together data-backed insights and in-depth ground intelligence into a holistic electoral campaign. We are passionate about our democracy and the politics that shape the world around us. We draw on some of the sharpest minds from distinguished institutions and diverse professional backgrounds to help us achieve our goal. The team brings in 7 years of experience in building electoral strategies that spark conversations, effect change, and help shape electoral and legislative ecosystems in our country. Job Summary: We are seeking a highly motivated and skilled Data Analyst to join our dynamic Project Management Office (PMO). This critical role involves developing, maintaining, and enhancing insightful PMO dashboards while also designing, implementing, and managing automated data pipelines. The ideal candidate will possess a strong blend of data analysis, visualization, and technical automation skills to ensure the PMO has timely, accurate data for tracking project performance, identifying trends, and making data-driven decisions. Key Responsibilities: PMO Dashboard Development & Management: Design, build, and maintain interactive dashboards using BI tools (e.g., Looker Studio, Tableau) to visualize key project metrics, resource allocation, timelines, risks, and overall PMO performance KPIs. Collaborate with PMO leadership and project managers to gather reporting requirements and translate them into effective data models and visualizations. Ensure data accuracy, consistency, and reliability within dashboards and reports. Perform data analysis to identify trends, potential issues, and areas for process improvement within project execution. Generate regular performance reports and support ad-hoc data requests from stakeholders. Data Management: Design, develop, implement, and maintain robust, automated data pipelines for Extract, Transform, Load (ETL/ELT) processes. Automate data collection from various sources including project management software, spreadsheets, databases, and APIs (e.g., Slack API). Load and process data efficiently into our data warehouse environment (e.g., Google BigQuery). Write and optimize SQL queries for data manipulation, transformation, and aggregation. Implement data quality checks, error handling, and monitoring for automated pipelines. Troubleshoot and resolve issues related to data extraction, transformation, loading, and pipeline failures. Document data sources, data models, pipeline architecture, and automation workflows. Required Qualifications & Skills: Bachelor's degree in Computer Science, Data Science, Statistics, Information Systems, Engineering, or a related quantitative field. Proven experience (approx. 2-3 years) in data analysis, business intelligence, data engineering, or a similar role. Strong proficiency in SQL for complex querying, data manipulation, and performance tuning. Hands-on experience building and maintaining dashboards using Tableau. Demonstrable experience in designing and automating data pipelines using scripting languages (Python preferred) and/or ETL/ELT tools. Solid understanding of data warehousing concepts, ETL/ELT principles, and data modeling. 
Excellent analytical, problem-solving, and critical thinking skills. Strong attention to detail and commitment to data accuracy. Good communication and collaboration skills, with the ability to interact with technical and non-technical stakeholders. Ability to work independently and manage priorities effectively. Preferred Qualifications & Skills: Experience working directly within a Project Management Office (PMO) or supporting project management functions. Familiarity with project management tools (e.g., Jira, Asana, MS Project) and concepts (Agile, Waterfall). Experience with cloud platforms, particularly Google Cloud Platform (GCP) and BigQuery. Experience with workflow orchestration tools (e.g., Airflow, Cloud Composer, Cloud Functions). Experience integrating data via APIs from various business systems. Basic understanding of data governance and data quality management practices. If you are a driven professional seeking a high-impact challenge and interested in joining a team of like-minded, motivated individuals who think strategically, act decisively, and get things done, email us at openings@varaheanalytics.com
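As a rough illustration of the "automate data collection from APIs and load into BigQuery" responsibility described in this posting, the sketch below pulls records from a REST endpoint and loads them into a warehouse table with the google-cloud-bigquery client; the endpoint URL, table ID, and column names are hypothetical.

```python
# Illustrative ETL snippet: pull rows from a REST endpoint and load them into BigQuery.
# The endpoint URL, table ID, and column names are hypothetical placeholders.
import requests
import pandas as pd
from google.cloud import bigquery


def load_project_status(table_id: str = "my-project.pmo.project_status") -> None:
    # Extract: fetch task records from a (hypothetical) project-management API.
    resp = requests.get("https://api.example.com/v1/tasks", timeout=30)
    resp.raise_for_status()
    df = pd.DataFrame(resp.json()["tasks"])

    # Transform: basic cleanup plus a simple data quality check.
    df["updated_at"] = pd.to_datetime(df["updated_at"])
    if df["task_id"].isna().any():
        raise ValueError("Data quality check failed: missing task_id values")

    # Load: append the batch into the warehouse table.
    client = bigquery.Client()
    job = client.load_table_from_dataframe(df, table_id)
    job.result()  # wait for the load job to finish


if __name__ == "__main__":
    load_project_status()
```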
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Senior Python Developer – AI/ML Document Automation Location: Hyderabad Work Mode: Hybrid Experience: 5+ Years Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation . The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms. Roles and Responsibilities: Design and develop modular Python applications for document parsing and intelligent automation. Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction. Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats. Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker. Collaborate with cross-functional teams to define automation goals and data strategies. Conduct code reviews, mentor junior developers, and uphold best coding practices. Monitor model performance and implement feedback mechanisms for continuous improvement. Maintain thorough documentation of workflows, metrics, and architectural decisions. Mandatory Skills: Expert in Python (OOP, asynchronous programming, modular design). Strong foundation in machine learning algorithms and natural language processing techniques. Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers. Proficient in developing REST APIs using FastAPI or Flask. Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools. Skilled in regex-based extraction and rule-based NER. Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure). Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
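To make the document-extraction stack named above more concrete, here is a minimal sketch combining PyMuPDF text extraction, regex-based field extraction, and a FastAPI endpoint; the field patterns and route are illustrative assumptions, not the actual system described in the posting.

```python
# Illustrative sketch: extract text from an uploaded PDF with PyMuPDF and pull
# a couple of invoice fields with regular expressions. Patterns and routes are hypothetical.
import re

import fitz  # PyMuPDF
from fastapi import FastAPI, UploadFile

app = FastAPI()

INVOICE_NO = re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.IGNORECASE)


@app.post("/extract/invoice")
async def extract_invoice(file: UploadFile):
    data = await file.read()
    doc = fitz.open(stream=data, filetype="pdf")   # open the PDF from bytes
    text = "".join(page.get_text() for page in doc)

    invoice_no = INVOICE_NO.search(text)
    total = TOTAL.search(text)
    return {
        "invoice_number": invoice_no.group(1) if invoice_no else None,
        "total_amount": total.group(1) if total else None,
    }
```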
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview: We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java, and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.
Key Responsibilities:
Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale.
Utilise Apache Spark for data processing and analysis.
Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching.
Write efficient and maintainable Python code.
Implement and manage distributed systems for data processing.
Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
Ensure data quality and integrity throughout the data lifecycle.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience with Apache Spark and Python/Java.
Proven experience in designing and developing data pipelines using Apache Spark and Python.
Strong knowledge of distributed systems; experience with distributed systems concepts (Hadoop, YARN) is a plus.
Proficiency in creating data pipelines with AWS; in-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus).
Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills; ability to work independently and as part of a team.
Preferred Qualifications:
Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
Familiarity with data warehousing and ETL processes.
Knowledge of data governance and best practices.
Good understanding of OOP concepts.
Hands-on experience with SQL database design.
Experience with Python, SQL, and data visualization/exploration tools.
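As an illustration of the Spark-on-AWS pipeline work described above, the sketch below reads raw CSV data from S3 with PySpark, applies a simple transformation, and writes partitioned Parquet back to S3; bucket names, paths, and columns are placeholders, not details from the posting.

```python
# Illustrative PySpark job: read raw CSV from S3, transform, write partitioned Parquet.
# Bucket names and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("s3://example-raw-bucket/orders/")           # placeholder input path
)

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").isNotNull())              # basic data quality filter
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")                        # partitioning for scan efficiency
    .parquet("s3://example-curated-bucket/orders/")   # placeholder output path
)
```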
Posted 1 week ago
0.0 - 1.0 years
8 - 14 Lacs
Hyderabad, Telangana
On-site
Job Title: Senior Python Developer – AI/ML Document Automation Location: Hyderabad Work Mode: Hybrid Experience: 5+ Years Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation . The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms. Roles and Responsibilities: Design and develop modular Python applications for document parsing and intelligent automation. Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction. Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats. Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker. Collaborate with cross-functional teams to define automation goals and data strategies. Conduct code reviews, mentor junior developers, and uphold best coding practices. Monitor model performance and implement feedback mechanisms for continuous improvement. Maintain thorough documentation of workflows, metrics, and architectural decisions. Mandatory Skills: Expert in Python (OOP, asynchronous programming, modular design). Strong foundation in machine learning algorithms and natural language processing techniques. Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers. Proficient in developing REST APIs using FastAPI or Flask. Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools. Skilled in regex-based extraction and rule-based NER. Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure). Exposure to MLOps tools such as MLflow, Airflow, or LangChain. Job Type: Full-time Pay: ₹800,000.00 - ₹1,400,000.00 per year Benefits: Provident Fund Schedule: Day shift Monday to Friday Application Question(s): Are you an immediate Joiner? Experience: Python : 2 years (Required) AI/ML: 2 years (Required) NLP: 1 year (Required) Location: Hyderabad, Telangana (Required) Work Location: In person
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Bangalore/Gurugram/Hyderabad YOE - 7+ years We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You’ll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog. Key Responsibilities: Design and implement ETL/ELT pipelines using Databricks and PySpark. Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets. Develop high-performance SQL queries and optimize Spark jobs. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Ensure data quality and compliance across all stages of the data lifecycle. Implement best practices for data security and lineage within the Databricks ecosystem. Participate in CI/CD, version control, and testing practices for data pipelines Required Skills: Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits). Strong hands-on skills with PySpark and Spark SQL. Solid experience writing and optimizing complex SQL queries. Familiarity with Delta Lake, data lakehouse architecture, and data partitioning. Experience with cloud platforms like Azure or AWS. Understanding of data governance, RBAC, and data security standards. Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with tools like Airflow, Git, Azure Data Factory, or dbt. Exposure to streaming data and real-time processing. Knowledge of DevOps practices for data engineering.
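For a sense of what working with Unity Catalog's three-level namespace looks like in practice, here is a small PySpark/SQL sketch written as a Databricks notebook cell (where `spark` is predefined); the catalog, schema, table, and group names are hypothetical, not from this posting.

```python
# Illustrative Databricks notebook cell: Unity Catalog three-level namespace
# (catalog.schema.table) plus a simple access grant. Names are hypothetical;
# `spark` is the SparkSession provided by the Databricks runtime.
df = spark.table("main.sales.orders")            # read a governed table

daily = (
    df.groupBy("order_date")
    .count()
    .withColumnRenamed("count", "orders")
)

# Write the aggregate back as a managed Delta table in the same schema.
daily.write.mode("overwrite").saveAsTable("main.sales.orders_daily")

# Grant read access to an analyst group via Unity Catalog SQL.
spark.sql("GRANT SELECT ON TABLE main.sales.orders_daily TO `data-analysts`")
```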
Posted 1 week ago
4.0 - 9.0 years
4 - 8 Lacs
Pune
Work from Office
Experience: 4+ years.
Expertise in Python is a must.
SQL (ability to write complex SQL queries) is a must.
Hands-on experience in Apache Flink Streaming or Spark Streaming is a must.
Hands-on expertise with Apache Kafka is a must.
Data lake development experience.
Orchestration (Apache Airflow is preferred).
Spark and Hive: optimization of Spark/PySpark and Hive applications.
Trino / AWS Athena (good to have). Snowflake (good to have). Data quality (good to have). File storage (S3 is good to have).
Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance - integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
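Since the role calls for hands-on Kafka plus Flink or Spark streaming, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and appends events to a data-lake path; the broker, topic, schema, and paths are assumptions for illustration only.

```python
# Illustrative Spark Structured Streaming job: consume JSON events from Kafka
# and append them to a data-lake path. Requires the spark-sql-kafka connector
# on the classpath. Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka_events_stream_example").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/events/")                      # placeholder sink
    .option("checkpointLocation", "s3://example-lake/_checkpoints/events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```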
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
✅ Job Title: Data Engineer – Apache Spark, Scala, GCP & Azure 📍 Location: Gurugram (Hybrid – 3 days/week in office) 🕒 Experience: 5–10 Years 🧑💻 Type: Full-time 📩 Apply: Share your resume with the details listed below to vijay.s@xebia.com 🕐 Availability: Immediate joiners or max 2 weeks' notice period only 🚀 About the Role Xebia is looking for a skilled Data Engineer to join our fast-paced team in Gurugram. You will work on building and optimizing scalable data pipelines, processing large datasets using Apache Spark and Scala , and deploying on cloud platforms like GCP and Azure . If you're passionate about clean architecture, high-quality data flow, and performance tuning, this is the opportunity for you. 🔧 Key Responsibilities Design and develop robust ETL pipelines using Apache Spark Write clean and efficient data processing code in Scala Handle large-scale data movement, transformation, and storage Build solutions on Google Cloud Platform (GCP) and Microsoft Azure Collaborate with teams to define data strategies and ensure data quality Optimize jobs for performance and cost on distributed systems Document technical designs and ETL flows clearly for the team ✅ Must-Have Skills Apache Spark Scala ETL design & development Cloud platforms: GCP & Azure Strong understanding of Data Engineering best practices Solid communication and collaboration skills 🌟 Good-to-Have Skills Apache tools (Kafka, Beam, Airflow, etc.) Knowledge of data lake and data warehouse concepts CI/CD for data pipelines Exposure to modern data monitoring and observability tools 💼 Why Xebia? At Xebia, you’ll be part of a forward-thinking, tech-savvy team working on high-impact, global data projects. We prioritize clean code, scalable solutions, and continuous learning. Join us to build real-time, cloud-native data platforms that power business intelligence across industries. 📤 To Apply Please share your updated resume and include the following details in your email to vijay.s@xebia.com : Full Name: Total Experience: Current CTC: Expected CTC: Current Location: Preferred Xebia Location: Gurugram Notice Period / Last Working Day (if serving): Primary Skills: LinkedIn Profile URL: Note: Only candidates who can join immediately or within 2 weeks will be considered. Build intelligent, scalable data solutions with Xebia – let’s shape the future of data together. 📊🚀
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hi All, we are hiring Data Engineers; please see the required skill sets below.
Mandatory Skills: GCP (especially BigQuery and Dataproc), Big Data technologies (Hadoop, Hive), Python / PySpark, Airflow and DAG orchestration.
Preferred Skills: experience with visualization tools such as Tableau or Power BI; familiarity with Jethro is a plus.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role - Cloud Architect – Analytics & Data Products
We’re looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.
Key Responsibilities
Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.
Requirements
10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
Strong hands-on expertise with the AWS ecosystem and tools listed above.
Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
Familiarity with data engineering and GenAI workflows is a plus.
AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification
6-8 years of good hands-on exposure to Big Data technologies – PySpark (DataFrame and SparkSQL), Hadoop, and Hive
Good hands-on experience with Python and Bash scripts
Good understanding of SQL and data warehouse concepts
Strong analytical, problem-solving, data analysis and research skills
Demonstrable ability to think outside of the box and not be dependent on readily available tools
Excellent communication, presentation and interpersonal skills are a must
Hands-on experience with cloud-platform-provided Big Data technologies (i.e. IAM, Glue, EMR, Redshift, S3, Kinesis)
Orchestration with Airflow; any job scheduler experience
Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations
Good to have:
Role
Develop efficient ETL pipelines as per business requirements, following the development standards and best practices.
Perform integration testing of the created pipelines in the AWS environment.
Provide estimates for development, testing & deployments on different environments.
Participate in code peer reviews to ensure our applications comply with best practices.
Create cost-effective AWS pipelines with the required AWS services, i.e. S3, IAM, Glue, EMR, Redshift, etc.
Experience 6 to 8 years
Job Reference Number 13024
Posted 1 week ago
3.0 - 5.0 years
0 - 0 Lacs
chennai
On-site
About Hexr Factory: We are always exploring possibilities to bridge the physical and digital worlds. We design and build Metaverse & Digital Twin technologies for the future of industry and entertainment.
Experience: 3-5 years
Title: Data Engineers
You are a successful candidate if you have:
3+ years of experience in data engineering, preferably with real-time systems.
Proficiency with Python, SQL, and distributed data systems (Kinesis, Spark, Flink, etc.).
Strong understanding of event-driven architectures, data lakes, and message serialization.
Experience with sensor data processing, telemetry ingestion, or mobility data is a plus.
Familiarity with Docker, CI/CD, Kubernetes, and cloud-native architectures.
Familiarity with building data pipelines and their workflows (e.g., Airflow).
Preferred Qualifications: Exposure to smart city platforms, V2X ecosystems, or other time-series paradigms. Experience integrating data from cameras and other sensors.
If interested, please share your resume to jobs@hexrfactory.com
Work location: Chennai, Tamil Nadu
Contact: 9884099499
Web: www.hexrfactory.com
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary We are looking for a Data Engineer with strong experience in cloud platforms (AWS & Azure) , Scala programming , and a solid understanding of data architecture and governance frameworks . You will play a key role in building, optimizing, and maintaining scalable data pipelines and systems while ensuring data quality, security, and compliance across the organization. Key Responsibilities Data Engineering & Development Design and develop reliable, scalable ETL/ELT data pipelines using Scala , SQL , and orchestration tools. Integrate and process structured, semi-structured, and unstructured data from various sources (APIs, databases, flat files, etc.). Develop solutions on AWS (e.g., S3, Glue, Redshift, EMR) and Azure (e.g., Data Factory, Synapse, Blob Storage). Cloud & Infrastructure Build cloud-native data solutions that align with enterprise architecture standards. Leverage IaC tools (Terraform, CloudFormation, ARM templates) to deploy and manage infrastructure. Monitor performance, cost, and security posture of data environments in both AWS and Azure. Data Architecture & Governance Collaborate with data architects to define and implement logical and physical data models. Apply data governance principles including data cataloging , lineage tracking , data privacy , and compliance (e.g., GDPR) . Support the enforcement of data policies and data quality standards across data domains. Collaboration & Communication Work cross-functionally with data analysts, scientists, architects, and business stakeholders to support data needs. Participate in Agile ceremonies and contribute to sprint planning and reviews. Maintain clear documentation of pipelines, data models, and data flows. Required Qualifications Bachelor's degree in Computer Science, Engineering, or a related field. 3–6 years of experience in data engineering or data platform development. Hands-on experience with AWS and Azure data services. Proficient in Scala for data processing (e.g., Spark, Kafka Streams). Strong SQL skills and familiarity with distributed systems. Experience with orchestration tools (e.g., Apache Airflow, Azure Data Factory).
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role Data Analysts are the drivers of how data is leveraged in solving business problems within their area. They are able to use their experience to consult with stakeholders in problem-definition, setting success metrics and shaping the way forward through data insights and effective communication with their audience. We are looking for experienced data analysts who would be able to deep-dive into data to generate insights, run root cause analysis autonomously and manage business stakeholders largely independently (seeking for help in complex scenarios) based on their experience by prioritizing business impact and efficiently adapting to business needs. About The Team The successful candidate will be a key member of the Payments Accounting Data Analytics Team. They will be responsible for generating data-driven analysis, reporting, root cause analysis, and data reconciliations to support stakeholders, and help maintain the complex data ecosystem. B.Responsible Works independently on data collection and preparation. Uses their past experience and seeks for help in complex scenarios to translate business problems into data driven insights. Leverages available cloud big data platforms to run root cause analysis, data reconciliations and shares the insights with the business team. Maintains and drives key reports, metrics and workflows running within their scope Is able to communicate results and outcomes clearly to stakeholders based on their knowledge and experience. Actively participates in business and/or analytics team activities and suggests ways of achieving objectives (standup, planning meeting, retrospectives) Networks and proactively connects with craft peers beyond the team scope Has strong understanding of the big data ecosystems Collaborates and is open to giving and receiving feedback with peers and direct stakeholders. Is flexible in adopting and proposing new approaches and expanding their technical competencies when a more efficient way presents itself Expected to get significant deep knowledge about the operational, tactical and strategic workings of the department. Has a main focus on business and technical opportunities. B.Skilled Educational background in Quantitative field - Preferably a Master's degree 3-5 years of experience in data analytics, Insight generation and data visualization Should have executed big data analytics projects in Industry setting Advanced knowledge of SQL, ideally with experience in Snowflake Good knowledge with Python/Py-Spark Experience of working with ETL and Data Modelling tools like Airflow, Dagster and DBT Knowledge and experience using data analysis and visualization tools (e.g: tableau, data studio, powerbi, mixpanel, etc) Familiarity with Cloud data platforms like AWS and GIT version control is a plus Familiarity with financial metrics is a big plus Strong communication and stakeholder management skills Able to understand details while keeping an eye on the bigger picture Pre-Employment Screening If your application is successful, your personal data may be used for a pre-employment screening check by a third party as permitted by applicable law. Depending on the vacancy and applicable law, a pre-employment screening may include employment history, education and other information (such as media information) that may be necessary for determining your qualifications and suitability for the position.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Snowflake Developer
Location: Pune (Kalyani Nagar)
Experience: 6+ years
Key Responsibilities:
• Design, develop, and maintain scalable Snowflake data warehouse solutions.
• Write and optimize complex SQL queries for data extraction, transformation, and reporting.
• Develop and manage Snowflake stored procedures using SQL and JavaScript.
• Implement and manage data integration between Snowflake and external systems (e.g., using ETL tools, APIs, or Snowpipe).
• Create and maintain data models and ensure data quality and consistency across environments.
• Collaborate with data engineers, analysts, and business stakeholders to understand requirements and deliver reliable solutions.
• Monitor performance, diagnose issues, and implement performance tuning best practices.
• Implement access controls and security policies aligned with enterprise standards.
Required Skills & Qualifications:
• Strong hands-on experience with the Snowflake platform and architecture.
• Working knowledge of Python and its core data libraries.
• Advanced proficiency in SQL, including writing and optimizing complex queries.
• Experience with stored procedures, user-defined functions (UDFs), and task scheduling in Snowflake.
• Familiarity with data integration tools (e.g., Informatica, Talend, Apache Airflow, dbt, Fivetran, or custom Python scripts).
• Experience with data modeling (star/snowflake schemas) and data warehouse design.
• Knowledge of cloud platforms (AWS, Azure, or GCP) and how Snowflake integrates with them.
• Experience working with large datasets and performance tuning of data loads/queries.
• Strong problem-solving and communication skills.
Please share your resume to hema@synapsetechservice.com
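As a small illustration of the Snowflake-plus-Python work this role describes, the sketch below uses the snowflake-connector-python package to run a query and call a stored procedure; the account, credentials, warehouse, and object names (including the procedure) are hypothetical placeholders.

```python
# Illustrative Snowflake access from Python via snowflake-connector-python.
# Account, credentials, and object names are hypothetical placeholders.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",          # placeholder warehouse
    database="SALES_DB",               # placeholder database
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Run an analytical query.
    cur.execute(
        "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY total DESC"
    )
    for region, total in cur.fetchall():
        print(region, total)

    # Invoke a (hypothetical) stored procedure that refreshes a reporting table.
    cur.execute("CALL refresh_daily_sales()")
finally:
    conn.close()
```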
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backend & MLOps Engineer – Integration, API, and Infrastructure Expert 1. Role Objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel in deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure. 2. Key Responsibilities: 2.1. Create RESTful and/or gRPC APIs for model services. 2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images. 2.3. Develop CI/CD pipelines for model training and deployment. 2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI. 2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines. 2.6. Build secured data ingestion and processing workflows (ETL/ELT). 2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage 3. Educational Qualifications Essential Requirements: 3.1. B.Tech/ M.Tech in Computer Science, Information Technology, or Software Engineering. 3.2. Strong foundation in distributed systems, databases, and cloud computing. 3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines. Professional Certifications: 3.4. AWS Solutions Architect/DevOps Engineer Professional 3.5. Google Cloud Professional ML Engineer or DevOps Engineer 3.6. Azure AI Engineer or DevOps Engineer Expert. 3.7. Kubernetes Administrator (CKA) or Developer (CKAD). 3.8. Docker Certified Associate Core Skills & Tools 4. Backend Development: 4.1. Languages: Python, FastAPI, Flask, Go, Java, Node.js, Rust (for performance-critical components) 4.2. Web Frameworks: FastAPI, Django, Flask, Spring Boot, Express.js. 4.3. API Development: RESTful APIs, GraphQL, gRPC, WebSocket connections. 4.4. Authentication & Security: OAuth 2.0, JWT, API rate limiting, encryption protocols. 5. MLOps & Model Management: 5.1. ML Platforms: MLflow, Kubeflow, Apache Airflow, Prefect 5.2. Model Serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML 5.3. Experiment Tracking: Weights & Biases, Neptune, ClearML 5.4. Feature Stores: Feast, Tecton, Amazon SageMaker Feature Store 5.5. Model Monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions 6. Infrastructure & DevOps: 6.1. Containerization: Docker, Podman, container optimization. 6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift. 6.3. Cloud Platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred). 6.4. Infrastructure as Code: Terraform, CloudFormation, Pulumi, Ansible. 6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD. 6.6. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins. 7. Database & Storage: 7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications) 7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch 7.3. Vector Databases: Pinecone, Weaviate, Chroma, Milvus 7.4. Data Lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg 7.5. Object Storage: AWS S3, Google Cloud Storage, MinIO 7.6. Backend: Python (FastAPI, Flask), Node.js (optional) 7.7. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins 8. Secure Deployment: 8.1. Military-grade security protocols and compliance 8.2. Air-gapped deployment capabilities 8.3. Encrypted data transmission and storage 8.4. 
Role-based access control (RBAC) & IDAM integration 8.5. Audit logging and compliance reporting 9. Edge Computing: 9.1. Deployment on naval vessels with air gapped connectivity. 9.2. Optimization of applications for resource-constrained environment. 10. High Availability Systems: 10.1. Mission-critical system design with 99.9% uptime. 10.2. Disaster recovery and backup strategies. 10.3. Load balancing and auto-scaling. 10.4. Failover mechanisms for critical operations. 11. Cross-Compatibility Requirements: 11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI). 11.2. Develop model loaders for AI Engineer's ONNX/ serialized models. 11.3. Provide UI developers with test environments, mock data, and endpoints. 11.4. Support frontend debugging, edge deployment bundling, and user role enforcement. 12. Experience Requirements 12.1. Production experience with cloud platforms and containerization. 12.2. Experience building and maintaining APIs serving millions of requests. 12.3. Knowledge of database optimization and performance tuning. 12.4. Experience with monitoring and alerting systems. 12.5. Architected and deployed large-scale distributed systems. 12.6. Led infrastructure migration or modernization projects. 12.7. Experience with multi-region deployments and disaster recovery. 12.8. Track record of optimizing system performance and cost
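To illustrate the "serve a serialized model behind a REST API" pattern referenced in sections 2.4 and 11.2 above, here is a minimal FastAPI plus ONNX Runtime sketch; the model path, input shape, and route are hypothetical assumptions rather than the actual system.

```python
# Illustrative model-serving microservice: load an ONNX model once at startup
# and expose a prediction endpoint plus a health check. Model path, input shape,
# and routes are hypothetical placeholders.
from typing import List

import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
session = ort.InferenceSession("model.onnx")          # placeholder model file
input_name = session.get_inputs()[0].name


class PredictRequest(BaseModel):
    features: List[float]                              # one flat feature vector


@app.post("/predict")
def predict(req: PredictRequest):
    x = np.array([req.features], dtype=np.float32)     # shape (1, n_features)
    outputs = session.run(None, {input_name: x})
    return {"prediction": outputs[0].tolist()}


@app.get("/healthz")
def healthz():
    return {"status": "ok"}
```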
Posted 1 week ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Take ownership of pipeline stability and performance across our GCP-based stack (BigQuery, GCS, Dataprep/Dataflow)
Lead the enhancement of our existing ETL workflows to support better modularity, reusability, and error handling
Help introduce lightweight governance practices, including column-level validation, source tracking, and transformation transparency
Support development of a semantic layer (e.g., KPI definitions, normalized metric naming) to reduce rework and support downstream users
Work with analysts and dashboard developers to structure data outputs for intuitive use and parameterization
Collaborate with team leadership to prioritize improvements based on impact and feasibility
Support platform readiness for automated reporting, predictive modeling, and AI-enhanced analysis
Contribute to team culture through clear documentation, mentoring, and code review
Participate in hiring, onboarding, and evolving our internal standards
What We’re Looking For
Must-Have:
4–6+ years of experience in data engineering, preferably in a fast-paced agency or multi-client environment
Solid command of Google Cloud Platform, especially BigQuery, GCS, and Cloud Dataprep (Alteryx) or Dataflow
Strong SQL and Python skills with a focus on transformation and data reliability
Experience building and maintaining ETL pipelines in production
Familiarity with metadata-driven development, version control, and task orchestration (Airflow or equivalent)
Proven ability to balance individual execution with team collaboration
Clear communicator, able to translate technical trade-offs to non-technical stakeholders
Nice-to-Have:
Experience applying basic data governance principles (e.g., lineage tracking, validation frameworks, naming conventions)
Exposure to building or maintaining a semantic layer (via dbt, LookML, etc.)
Familiarity with AI/ML workflows or tooling for automated insight generation
Understanding of marketing or media datasets
Experience developing custom marketing attribution models
Experience mentoring junior team members or participating in code/process standardization
Posted 1 week ago
2.0 years
0 Lacs
India
Remote
Job description
L1 Support – Data Engineering (Remote, South India)
Location: Permanently based in South India (any city) – non-negotiable
Work Mode: Remote | 6 days/week | 24x7x365 support (rotational shifts)
Salary Range: Between INR 2.5 and 3 Lacs per annum
Experience: 2 years
Language: English proficiency mandatory; Hindi is a plus
About the Role
We're looking for an experienced and motivated L1 Support Engineer – Data Engineering to join our growing team. If you have solid exposure to AWS, SQL, and Python scripting, and you're ready to thrive in a 24x7 support environment, this role is for you!
What You’ll Do
Monitor and support AWS services (S3, EC2, CloudWatch, IAM)
Handle SQL-based issue resolution and data analysis
Run and maintain Python scripts; shell scripting is a plus
Support ETL pipelines and data workflows
Monitor Apache Airflow DAGs and resolve basic issues
Collaborate with cross-functional and multicultural teams
What We’re Looking For
B.Tech or MCA preferred, but candidates with a Bachelor’s degree in any field and the right skill set are welcome to apply.
2 years of Data Engineering Support or similar experience
Strong skills in AWS, SQL, Python, and ETL processes
Familiarity with data warehousing (Amazon Redshift or similar)
Ability to work rotational shifts in a 6-day, 24x7 environment
Excellent communication and problem-solving skills
English fluency is required; Hindi is an advantage
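For a flavour of the routine AWS monitoring tasks listed above, here is a small boto3 sketch that checks whether the day's expected file landed in S3 and pulls a basic CloudWatch CPU metric; the bucket, prefix, and instance ID are hypothetical placeholders.

```python
# Illustrative L1-style check: confirm today's data file landed in S3 and read a
# CloudWatch CPU metric for an EC2 host. Bucket, prefix, and instance ID are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
resp = s3.list_objects_v2(Bucket="example-ingest-bucket", Prefix=f"daily/{today}/")
if resp.get("KeyCount", 0) == 0:
    print(f"ALERT: no files found under daily/{today}/")   # would raise a ticket here

metrics = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(metrics["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```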
Posted 1 week ago
80.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title Associate Data Engineer (Internship Program to Full-time Employee) Job Description For more than 80 years, Kaplan has been a trailblazer in education and professional advancement. We are a global company at the intersection of education and technology, focused on collaboration, innovation, and creativity to deliver a best in class educational experience and make Kaplan a great place to work. Our offices in India opened in Bengaluru in 2018. Since then, our team has fueled growth and innovation across the organization, impacting students worldwide. We are eager to grow and expand with skilled professionals like you who use their talent to build solutions, enable effective learning, and improve students’ lives. The future of education is here and we are eager to work alongside those who want to make a positive impact and inspire change in the world around them. The Associate Data Engineer at Kaplan North America (KNA) within the Analytics division will work with world class psychometricians, data scientists and business analysts to forever change the face of education. This role is a hands-on technical expert who will help implement an Enterprise Data Warehouse powered by AWS RA3 as a key feature of our Lake House architecture. The perfect candidate possesses strong technical knowledge in data engineering, data observability, Infrastructure automation, data ops methodology, systems architecture, and development. You should be expert at designing, implementing, and operating stable, scalable, low cost solutions to flow data from production systems into the data warehouse and into end-user facing applications. You should be able to work with business customers in a fast-paced environment understanding the business requirements and implementing data & reporting solutions. Above all you should be passionate about working with big data and someone who loves to bring datasets together to answer business questions and drive change Responsibilities You design, implement, and deploy data solutions. You solve difficult problems generating positive feedback. Build different types of data warehousing layers based on specific use cases Lead the design, implementation, and successful delivery of large-scale, critical, or difficult data solutions involving a significant amount of work Build scalable data infrastructure and understand distributed systems concepts from a data storage and compute perspective Utilize expertise in SQL and have a strong understanding of ETL and data modeling Ensure the accuracy and availability of data to customers and understand how technical decisions can impact their business’s analytics and reporting Be proficient in at least one scripting/programming language to handle large volume data processing. 30-day notification period preferred Requirements In-depth knowledge of the AWS stack (RA3, Redshift, Lambda, Glue, SnS). Experience in data modeling, ETL development and data warehousing. Effective troubleshooting and problem-solving skills Strong customer focus, ownership, urgency and drive. Excellent verbal and written communication skills and the ability to work well in a team Preferred Qualification Proficiency with Airflow, Tableau & SSRS Location Bangalore, KA, India Additional Locations Employee Type Employee Job Functional Area Systems Administration/Engineering Business Unit 00091 Kaplan Higher ED At Kaplan, we recognize the importance of attracting and retaining top talent to drive our success in a competitive market. 
Our salary structure and compensation philosophy reflect the value we place on the experience, education, and skills that our employees bring to the organization, taking into consideration labor market trends and total rewards. All positions with Kaplan are paid at least $15 per hour or $31,200 per year for full-time positions. Additionally, certain positions are bonus or commission-eligible. And we have a comprehensive benefits package, learn more about our benefits here. Diversity & Inclusion Statement Kaplan is committed to cultivating an inclusive workplace that values diversity, promotes equity, and integrates inclusivity into all aspects of our operations. We are an equal opportunity employer and all qualified applicants will receive consideration for employment regardless of age, race, creed, color, national origin, ancestry, marital status, sexual orientation, gender identity or expression, disability, veteran status, nationality, or sex. We believe that diversity strengthens our organization, fuels innovation, and improves our ability to serve our students, customers, and communities. Learn more about our culture here. Kaplan considers qualified applicants for employment even if applicants have an arrest or conviction in their background check records. Kaplan complies with related background check regulations, including but not limited to, the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. There are various positions where certain convictions may disqualify applicants, such as those positions requiring interaction with minors, financial records, or other sensitive and/or confidential information. Kaplan is a drug-free workplace and complies with applicable laws.
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Company Description Profile Solution is an innovative supplier of thermal management and airflow solutions for the data center international telecom, and IT markets. The company is headquartered in Mumbai with an office in Singapore. Profile Solution specializes in offering products such as perforated high volume tiles, intelligent active floor tiles, overhead air-movers, air blocks, rack baffles, raised floor partition solutions, thermal testing, and cooling audits analysis. Role Description This is a full-time on-site role for a Sales & Estimation Engineer at Profile Solution in Mumbai. The Sales & Estimation Engineer will be responsible for conducting on-site audits to discover variables in data centers, recommend solutions, and work with clients to implement energy-efficient cooling approaches. The role involves collaborating with top cloud computing companies to provide thermal containment infrastructure solutions. Qualifications Sales and Estimation skills Technical knowledge in thermal management and airflow solutions Experience in conducting on-site audits and analyzing data center variables Strong communication and presentation skills Ability to collaborate effectively with clients and team members Bachelor's degree in Engineering or related field Previous experience in the data center industry is a plus
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Responsibilities: ✅Build and optimize scalable data pipelines using Python, PySpark, and SQL. ✅Design and develop on AWS stack (S3, Glue, EMR, Athena, Redshift, Lambda). ✅Leverage Databricks for data engineering workflows and orchestration. ✅Implement ETL/ELT processes with strong data modeling (Star/Snowflake schemas). ✅Work on job orchestration using Airflow, Databricks Jobs, or AWS Step Functions. ✅Collaborate with agile, cross-functional teams to deliver reliable data solutions. ✅Troubleshoot and optimize large-scale distributed data environments. Must-Have: ✅4–6+ years in Data Engineering. ✅Hands-on experience in Python, SQL, PySpark, and AWS services. ✅Solid Databricks expertise. ✅Experience with DevOps tools: Git, Jenkins, GitHub Actions. ✅Understanding of data lake/lakehouse/warehouse architectures. Good to Have: ✅AWS/Databricks certifications. ✅Experience with data observability tools (Monte Carlo, Datadog). ✅Exposure to regulated domains like Healthcare or Finance. ✅Familiarity with streaming tools (Kafka, Kinesis, Spark Streaming). ✅Knowledge of modern data concepts (Data Mesh, Data Fabric). ✅Experience with visualization tools: Power BI, Tableau, QuickSight.
Posted 1 week ago
4.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Mindera
At Mindera, we craft software with people we love. We're a collaborative, global team of engineers who value open communication, great code, and building impactful products. We're looking for a talented C#/.NET Developer to join our growing team in Gurugram and help us build scalable, high-quality software systems.
Requirements
What You'll Do
Build, maintain, and scale robust C#/.NET applications in a fast-paced Agile environment.
Work closely with product owners and designers to bring features to life.
Write clean, maintainable code following SOLID and OOP principles.
Work with SQL/NoSQL databases, optimizing queries and schema designs.
Collaborate in a Scrum or Kanban environment with engineers around the world.
Use Git for version control and participate in code reviews.
Contribute to our CI/CD pipelines and automated testing workflows.
Must-Have Skills
What We're Looking For
4-9 years of hands-on experience with C# and .NET technologies.
Solid understanding of Object-Oriented Programming (OOP) and clean code principles.
Proven experience working with databases (SQL or NoSQL).
Experience in an Agile team (Scrum/Kanban).
Familiarity with Git and collaborative development practices.
Exposure to CI/CD pipelines and test automation.
Nice-to-Have Skills
Experience with Rust (even hobbyist experience is valued).
Background working with Python or Scala for Spark-based applications.
Hands-on with Docker and container-based architecture.
Familiarity with Kubernetes for orchestration.
Experience working with Apache Airflow for data workflows.
Cloud experience with Google Cloud Platform (GCP) or Microsoft Azure.
Benefits We Offer
Flexible working hours (self-managed)
Competitive salary
Annual bonus, subject to company performance
Access to Udemy online training and opportunities to learn and grow within the role
About Mindera
At Mindera we use technology to build products we are proud of, with people we love. Software engineering applications, including web and mobile, are at the core of what we do at Mindera. We partner with our clients to understand their product and deliver high-performance, resilient and scalable software systems that create an impact for their users and businesses across the world. You get to work with a bunch of great people, where the whole team owns the project together. Our culture reflects our lean and self-organisation attitude. We encourage our colleagues to take risks, make decisions, work in a collaborative way and talk to everyone to enhance communication. We are proud of our work and we love to learn all and everything while navigating through an Agile, Lean and collaborative environment.
Follow our LinkedIn page - https://tinyurl.com/minderaindia
Check out our Blog: http://mindera.com/ and our Handbook: http://bit.ly/MinderaHandbook
Our offices are located in: Aveiro, Portugal | Porto, Portugal | Leicester, UK | San Diego, USA | San Francisco, USA | Chennai, India | Bengaluru, India
Posted 1 week ago
0.0 - 6.0 years
0 - 0 Lacs
Haryana, Haryana
On-site
Job Overview We are seeking a skilled and detail-oriented HVAC Engineer with experience in cleanroom HVAC systems, including ducting, mechanical piping, and sheet metal works. The ideal candidate will assist in site execution, technical coordination, and quality assurance in line with cleanroom standards for pharmaceutical, biotech, or industrial facilities. Key Responsibilities : Support end-to-end HVAC system execution, including ducting, AHU installation, chilled water piping, and insulation. Supervise and coordinate day-to-day HVAC activities at the site in line with approved drawings and technical specifications. Review and interpret HVAC layouts, shop drawings, and coordination drawings for proper implementation. Ensure HVAC materials (ducts, dampers, diffusers, filters, etc.) meet project specifications and site requirements. Coordinate with other services (plumbing, electrical, BMS, fire-fighting) to ensure conflict-free execution. Monitor subcontractor work and labor force for compliance with timelines, quality, and safety standards. Assist in air balancing, testing & commissioning activities including HEPA filter installation and pressure validation. Conduct site surveys, measurements, and prepare daily/weekly progress reports. Maintain records for material movement, consumption, and inspection checklists. Work closely with the design and planning team to address technical issues and implement design revisions. Ensure cleanroom HVAC work complies with ISO 14644, GMP guidelines, and other regulatory standards. Required Skills & Qualifications : Diploma / B.Tech / B.E. in Mechanical Engineering or equivalent. 3–6 years of site execution experience in HVAC works, preferably in cleanroom or pharma/industrial MEP projects. Sound knowledge of duct fabrication, SMACNA standards, GI/SS materials, and cleanroom duct installation techniques. Hands-on experience with HVAC drawings, site measurement, and installation planning. Familiarity with testing procedures such as DOP/PAO testing, air balancing, and filter integrity testing. Proficient in AutoCAD, MS Excel, and basic computer applications. Good communication skills, site discipline, and teamwork. Desirable Attributes : Knowledge of cleanroom classifications and airflow management. Ability to manage vendors, material tracking, and basic troubleshooting. Familiar with safety practices and quality control procedures on site. Job Type: Full-time Pay: ₹30,000.00 - ₹50,000.00 per month Benefits: Health insurance Life insurance Provident Fund Schedule: Day shift Supplemental Pay: Overtime pay Ability to commute/relocate: Haryana, Haryana: Reliably commute or planning to relocate before starting work (Preferred) Language: english (Preferred) Work Location: In person
Posted 1 week ago
3.0 years
15 - 20 Lacs
Madurai, Tamil Nadu
On-site
Dear Candidate, greetings of the day! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn at https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/ or email: kanthasanmugam.m@techmango.net
Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic technology solutions aligned with its business partners' goals. We are a full-scale, leading Software and Mobile App Development Company. Techmango is driven by the mantra “Client's Vision is our Mission”, and we hold firmly to that statement. Our aim is to be the technologically advanced and most loved organization, providing prime-quality and cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Techmango: https://www.techmango.net/
Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate
About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.
Role Summary
As a GCP Data Engineer, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.
Key Responsibilities:
Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
Define data strategy, standards, and best practices for cloud data engineering and analytics
Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
Architect data lakes, warehouses, and real-time data platforms
Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
Provide technical leadership in architectural decisions and future-proofing the data ecosystem
Required Skills & Qualifications:
5+ years of experience in data architecture, data engineering, or enterprise data platforms.
Minimum 3 years of hands-on experience with GCP data services.
Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, star/snowflake schema).
Experience with real-time data processing, streaming architectures, and batch ETL pipelines.
Good understanding of IAM, networking, security models, and cost optimization on GCP.
Prior experience in leading cloud data transformation projects.
Excellent communication and stakeholder management skills.
Preferred Qualifications:
GCP Professional Data Engineer / Architect Certification.
Experience with Terraform, CI/CD, GitOps, Looker / Data Studio / Tableau for analytics.
Exposure to AI/ML use cases and MLOps on GCP. Experience working in agile environments and client-facing roles. What We Offer: Opportunity to work on large-scale data modernization projects with global clients. A fast-growing company with a strong tech and people culture. Competitive salary, benefits, and flexibility. Collaborative environment that values innovation and leadership. Job Type: Full-time Pay: ₹1,500,000.00 - ₹2,000,000.00 per year Application Question(s): Current CTC ? Expected CTC ? Notice Period ? (If you are serving Notice period please mention the Last working day) Experience: GCP Data Architecture : 3 years (Required) BigQuery: 3 years (Required) Cloud Composer (Airflow): 3 years (Required) Location: Madurai, Tamil Nadu (Required) Work Location: In person
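For reference on the Pub/Sub-to-BigQuery ingestion pattern this role describes (Dataflow, Apache Beam, BigQuery), here is a minimal Apache Beam sketch; the subscription, table, and schema are illustrative assumptions, not details of the actual project.

```python
# Illustrative Apache Beam streaming pipeline: read JSON messages from Pub/Sub
# and write them to BigQuery. Subscription, table, and schema are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # pass --runner=DataflowRunner in practice

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/orders-sub"
        )
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table="my-project:analytics.orders",
            schema="order_id:STRING,amount:FLOAT,order_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```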
Posted 1 week ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Join our dynamic and high-impact Data team as a Data Engineer, where you'll be responsible for safely receiving and storing trading-related data for the India teams, as well as operating and improving our shared data access and data processing systems. This is a critical role in the organisation as the data platform drives a huge range of trader analysis, simulation, reporting and insights. The ideal candidate should have work experience in systems engineering, preferably with prior exposure to financial markets and with proven working knowledge in the fields of Linux administration, orchestration and automation tools, systems hardware architecture as well as storage and data protection technologies. Your Core Responsibilities: Manage and monitor all distributed systems, storage infrastructure, and data processing platforms, including HDFS, Kubernetes, Dremio, and in-house data pipelines Drive heavy focus on systems automation and CI/CD to enable rapid deployment of hardware and software solutions Collaborate closely with systems and network engineers, traders, and developers to support and troubleshoot their queries Stay up to date with the latest technology trends in the industry; propose, evaluate, and implement innovative solutions Your Skills and Experience: 5–7 years of experience in managing large-scale multi-petabyte data infrastructure in a similar role Advanced knowledge of Linux system administration and internals, with proven ability to troubleshoot issues in Linux environments Deep expertise in at least one of the following technologies: Kafka, Spark, Cassandra/Scylla, or HDFS Strong working knowledge of Docker, Kubernetes, and Helm Experience with data access technologies such as Dremio and Presto Familiarity with workflow orchestration tools like Airflow and Prefect Exposure to cloud platforms such as AWS, GCP, or Azure Proficiency with CI/CD pipelines and version control systems like Git Understanding of best practices in data security and compliance Demonstrated ability to solve problems proactively and creatively with a results-oriented mindset Quick learner with excellent troubleshooting skills High degree of flexibility and adaptability About Us IMC is a global trading firm powered by a cutting-edge research environment and a world-class technology backbone. Since 1989, we’ve been a stabilizing force in financial markets, providing essential liquidity upon which market participants depend. Across our offices in the US, Europe, Asia Pacific, and India, our talented quant researchers, engineers, traders, and business operations professionals are united by our uniquely collaborative, high-performance culture, and our commitment to giving back. From entering dynamic new markets to embracing disruptive technologies, and from developing an innovative research environment to diversifying our trading strategies, we dare to continuously innovate and collaborate to succeed.
Posted 1 week ago