5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position: Azure Data Engineer
Location: Hyderabad
Mandatory Skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice Period: 0 to 30 days / immediate joiner / serving notice period
Interview Date: 13-June-25
Interview Mode: Virtual drive

Must-have experience:
- Strong design and data solutioning skills
- Hands-on PySpark experience with complex transformations and large-dataset handling (see the sketch after this listing)
- Good command of and hands-on experience in Python
- Experience working with the following concepts, packages, and tools: object-oriented and functional programming; NumPy, Pandas, Matplotlib, requests, pytest; Jupyter, PyCharm, and IDLE; Conda and virtual environments
- Working experience with Hive, HBase, or similar

Azure skills:
- Working experience with Azure Data Lake, Azure Data Factory, Azure Databricks, Azure SQL Database, and Azure DevOps
- Azure AD integration, service principals, pass-through login, etc.
- Networking: VNet, private links, service connections, etc.
- Integrations: Event Grid, Service Bus, etc.

Database skills:
- Experience with any one of Oracle, Postgres, or SQL Server
- Oracle PL/SQL or T-SQL experience
- Data modelling

Thank you
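By way of illustration, a minimal PySpark sketch of the kind of complex transformation and large-dataset handling this role calls for; the table, column names, and storage paths are hypothetical assumptions, not part of the posting:

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("order-enrichment").getOrCreate()

# Hypothetical source: a large Delta table of raw orders in ADLS.
orders = spark.read.format("delta").load("abfss://raw@datalake.dfs.core.windows.net/orders")

# Deduplicate on a business key, keeping the latest record per order.
latest = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
deduped = (orders
           .withColumn("rn", F.row_number().over(latest))
           .filter(F.col("rn") == 1)
           .drop("rn"))

# Aggregate to a daily revenue summary, repartitioning by date to keep shuffles balanced.
daily = (deduped
         .groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers"))
         .repartition("order_date"))

daily.write.format("delta").mode("overwrite").save(
    "abfss://curated@datalake.dfs.core.windows.net/daily_revenue")
```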
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We are looking for a highly skilled Senior Data Engineer with strong expertise in Apache Spark and Databricks to join our growing data engineering team. You will be responsible for designing, developing, and optimizing scalable data pipelines and applications using modern cloud data technologies. This is a hands-on role requiring deep technical knowledge, strong problem-solving skills, and a passion for building efficient, high-performance data solutions that drive business value.

Responsibilities:
- Design, develop, and implement scalable data pipelines and applications using Apache Spark and Databricks, adhering to industry best practices.
- Perform in-depth performance tuning and optimization of Spark applications within the Databricks environment (see the tuning sketch after this listing).
- Troubleshoot complex issues related to data ingestion, transformation, and pipeline execution.
- Collaborate with cross-functional teams including data scientists, analysts, and architects to deliver end-to-end data solutions.
- Continuously evaluate and adopt new technologies and tools in the Databricks and cloud ecosystem.
- Optimize Databricks cluster configurations for cost-effectiveness and performance.
- Apply data engineering principles to enable high-quality data ingestion, transformation, and delivery processes.
- Document technical designs, development processes, and operational procedures.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in data engineering or big data development.
- 5+ years of hands-on experience with Apache Spark and Databricks.
- Deep understanding of Spark internals, Spark Streaming, and Delta Lake.
- Experience developing solutions using Azure Data Services, including Azure Databricks, Azure Data Factory, Azure DevOps, Azure Functions, Azure SQL Database, Azure Event Grid, and Cosmos DB.
- Familiarity with Striim or similar real-time data integration platforms is a plus.
- Proficient in PySpark or Scala.
- Strong experience in performance tuning, cost optimization, and cluster management in Databricks.
- Solid understanding of data warehousing, ETL/ELT pipelines, and data modelling.
- Experience working with cloud platforms (Azure preferred; AWS/GCP is a plus).
- Familiarity with Agile/Scrum methodologies.

Preferred Qualifications:
- Databricks Certified Professional Data Engineer certification is a strong plus.
- Strong communication skills, both written and verbal, with the ability to convey technical concepts to non-technical stakeholders.
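As a hedged illustration of the Spark tuning this posting describes, a sketch that avoids an unnecessary shuffle with a broadcast join and writes a partitioned Delta table; the dataset names and mount paths are hypothetical, and the OPTIMIZE statement assumes a Databricks/Delta Lake runtime:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/raw/events")      # large fact table
regions = spark.read.format("delta").load("/mnt/ref/regions")    # small dimension table

# Broadcasting the small dimension avoids a full shuffle of the large fact table.
enriched = events.join(F.broadcast(regions), on="region_id", how="left")

# Partition the output by event_date so downstream queries can prune files.
(enriched.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("event_date")
         .save("/mnt/curated/events_enriched"))

# Compact small files and co-locate rows for selective queries (Databricks/Delta Lake).
spark.sql("OPTIMIZE delta.`/mnt/curated/events_enriched` ZORDER BY (region_id)")
```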
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Job Title: Azure Data Engineer
Experience Required: 5+ Years
Location: Remote
Employment Type: Full-time

Job Summary:
We are looking for a skilled Azure Data Engineer with 5 years of experience in building and optimizing data pipelines and architectures using Azure services. The ideal candidate will be proficient in big data processing, ETL/ELT pipelines, and Azure-based data solutions. You will work closely with data architects, analysts, and business stakeholders to ensure data quality, availability, and performance.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines using Azure Data Factory, Databricks, and Azure Synapse Analytics
- Ingest data from multiple sources (structured, semi-structured, and unstructured) into Azure Data Lake / Data Warehouse
- Build and optimize ETL/ELT workflows for data transformation and integration
- Ensure data integrity and implement monitoring, logging, and alerting for pipelines (a minimal check sketch follows this listing)
- Collaborate with data scientists and analysts to support advanced analytics and machine learning use cases
- Develop and maintain CI/CD pipelines for data solutions using tools like Azure DevOps
- Implement data security, governance, and compliance best practices
- Performance tuning and query optimization of SQL-based solutions on Azure

Required Skills:
- Strong experience with Azure Data Factory, Azure Synapse, Azure Data Lake Storage (ADLS), and Azure Databricks
- Solid hands-on experience in SQL, PySpark, Python, or Scala
- Proficiency in designing and implementing data models, partitioning, and data lake architectures
- Experience with Azure SQL Database, Cosmos DB, or SQL Server
- Knowledge of Azure DevOps, Git, and CI/CD processes
- Understanding of Delta Lake, Parquet, ORC, and file formats used in big data
- Familiarity with data governance frameworks and security models on Azure

Preferred Qualifications:
- Azure certification: Microsoft Certified: Azure Data Engineer Associate (DP-203)
- Experience working in Agile/Scrum environments
- Experience integrating data from on-premises to cloud environments
- Familiarity with Power BI, Azure Monitor, Log Analytics, or Terraform for infrastructure provisioning
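A minimal sketch of the pipeline monitoring and data-integrity checks this role mentions, using plain PySpark assertions; the table path, key column, and thresholds are illustrative assumptions:

```python
import logging
from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.quality")

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/customers")  # hypothetical table

# Simple post-load checks: row count, null keys, and duplicate business keys.
row_count = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()
dup_keys = row_count - df.select("customer_id").distinct().count()

log.info("rows=%d null_keys=%d duplicate_keys=%d", row_count, null_keys, dup_keys)

# Fail the job (and let the orchestrator alert) when integrity rules are violated.
if row_count == 0 or null_keys > 0 or dup_keys > 0:
    raise ValueError("Data quality check failed for /mnt/curated/customers")
```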
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
Mandatory Skills: Azure Cloud Technologies, Azure Data Factory, Azure Databricks (Advanced Knowledge), PySpark, CI/CD Pipelines (Jenkins, GitLab CI/CD, or Azure DevOps), Data Ingestion, SQL

Seeking a skilled Data Engineer with expertise in Azure cloud technologies, data pipelines, and big data processing. The ideal candidate will be responsible for designing, developing, and optimizing scalable data solutions.

Responsibilities

Azure Databricks and Azure Data Factory Expertise:
- Demonstrate proficiency in designing, implementing, and optimizing data workflows using Azure Databricks and Azure Data Factory.
- Provide expertise in configuring and managing data pipelines within the Azure cloud environment.

PySpark Proficiency:
- Possess a strong command of PySpark for data processing and analysis.
- Develop and optimize PySpark code to ensure efficient and scalable data transformations.

Big Data & CI/CD Experience:
- Ability to troubleshoot and optimize data processing tasks on large datasets.
- Design and implement automated CI/CD pipelines for data workflows, using tools like Jenkins, GitLab CI/CD, or Azure DevOps to automate the building, testing, and deployment of data pipelines.

Data Pipeline Development & Deployment:
- Design, implement, and maintain end-to-end data pipelines for various data sources and destinations. This includes unit tests for individual components, integration tests to ensure that different components work together correctly, and end-to-end tests to verify the entire pipeline's functionality (see the test sketch after this listing).
- Familiarity with GitHub/repos for deployment of code.
- Ensure data quality, integrity, and reliability throughout the entire data pipeline.

Extraction, Ingestion, and Consumption Frameworks:
- Develop frameworks for efficient data extraction, ingestion, and consumption.
- Implement best practices for data integration and ensure seamless data flow across the organization.

Collaboration and Communication:
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
- Communicate effectively with stakeholders to gather and clarify data-related requirements.

Requirements
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field.
- 4+ years of relevant hands-on experience in data engineering with Azure cloud services and advanced Databricks.
- Strong analytical and problem-solving skills in handling large-scale data pipelines.
- Experience in big data processing and working with structured and unstructured datasets.
- Expertise in designing and implementing data pipelines for ETL workflows.
- Strong proficiency in writing optimized queries and working with relational databases.
- Experience in developing data transformation scripts and managing big data processing using PySpark.

Skills: SQL, Azure, Azure Databricks (advanced knowledge), PySpark, data ingestion, Azure cloud technologies, Azure Data Factory, CI/CD pipelines (Jenkins, GitLab CI/CD, or Azure DevOps)
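A hedged sketch of unit-testing an individual pipeline component with pytest and a local SparkSession; the transformation under test, dedupe_latest, is a hypothetical example and not part of the posting:

```python
import pytest
from pyspark.sql import SparkSession, Window, functions as F


def dedupe_latest(df, key_col, ts_col):
    """Keep only the most recent row per business key (example component under test)."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return df.withColumn("_rn", F.row_number().over(w)).filter("_rn = 1").drop("_rn")


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_dedupe_latest_keeps_newest_row(spark):
    rows = [("c1", "2024-01-01", 10), ("c1", "2024-02-01", 20), ("c2", "2024-01-15", 5)]
    df = spark.createDataFrame(rows, ["customer_id", "updated_at", "amount"])

    result = {r["customer_id"]: r["amount"]
              for r in dedupe_latest(df, "customer_id", "updated_at").collect()}

    assert result == {"c1": 20, "c2": 5}
```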
Posted 1 week ago
4.0 years
0 Lacs
Chandigarh, India
On-site
Experience Required: 4+ Years

Key Responsibilities:
- Design, build, and maintain scalable and reliable data pipelines on Databricks, Snowflake, or equivalent cloud platforms.
- Ingest and process structured, semi-structured, and unstructured data from a variety of sources including APIs, RDBMS, and file systems.
- Perform data wrangling, cleansing, transformation, and enrichment using PySpark, Pandas, NumPy, or similar libraries (see the sketch after this listing).
- Optimize and manage large-scale data workflows for performance, scalability, and cost-efficiency.
- Write and optimize complex SQL queries for transformation, extraction, and reporting.
- Design and implement efficient data models and database schemas with appropriate partitioning and indexing strategies for a data warehouse or data mart.
- Leverage cloud services (e.g., AWS S3, Glue, Kinesis, Lambda) for storage, processing, and orchestration.
- Build containerized solutions using Docker and manage deployment pipelines via CI/CD tools such as Azure DevOps, GitHub Actions, or Jenkins.
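For the wrangling and cleansing point, a small pandas sketch under illustrative assumptions (the raw extract, column names, and the spend-band rule are invented for the example):

```python
import numpy as np
import pandas as pd

# Hypothetical raw extract pulled from an API or file drop.
raw = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", None],
    "signup_date": ["2024-01-03", "2024-01-03", "bad-date", "2024-02-01"],
    "spend": ["120.5", "120.5", "n/a", "80"],
})

clean = (raw
         .dropna(subset=["customer_id"])          # drop rows missing the business key
         .drop_duplicates()                       # remove exact duplicates
         .assign(
             signup_date=lambda d: pd.to_datetime(d["signup_date"], errors="coerce"),
             spend=lambda d: pd.to_numeric(d["spend"], errors="coerce").fillna(0.0))
         # Simple enrichment: bucket customers by spend.
         .assign(spend_band=lambda d: np.where(d["spend"] >= 100, "high", "standard")))

print(clean)
```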
Posted 1 week ago
8.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place!

Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.

As a Senior Data Engineer at Rearc, you will be at the forefront of driving technical excellence within our data engineering team. Your expertise in data architecture, cloud-native solutions, and modern data processing frameworks will be essential in designing workflows that are optimized for efficiency, scalability, and reliability. You'll leverage tools like Databricks, PySpark, and Delta Lake to deliver cutting-edge data solutions that align with business objectives. Collaborating with cross-functional teams, you will design and implement scalable architectures while adhering to best practices in data management and governance. Building strong relationships with both technical teams and stakeholders will be crucial as you lead data-driven initiatives and ensure their seamless execution.

What You Bring
- 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases.
- Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments.
- Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows.
- Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT, or AWS Glue (a minimal orchestration sketch follows this listing).
- Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask. Proficiency with Spark and Databricks is highly desirable.
- Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg, and DynamoDB.
- In-depth knowledge of data architecture principles and best practices, especially in cloud environments.
- Proven experience with AWS services, including expertise in using the AWS CLI, SDK, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK.
- Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders.
- Demonstrated ability to quickly adapt to new tasks and roles in a dynamic environment.

What You'll Do
- Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives.
- Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability.
- Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes.
- Technical Expertise: Apply deep expertise in ETL processes, data modelling, and data warehousing to optimize data workflows and ensure data integrity and quality.
- Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions; mentor and coach junior team members, fostering their growth and development in data engineering practices.
- Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums.

Some More About Us
Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple: finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
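For the orchestration point above, a minimal sketch assuming a recent Airflow 2.x TaskFlow API; the DAG id, schedule, and task bodies are illustrative assumptions rather than Rearc's actual pipelines:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def daily_sales_pipeline():
    @task
    def extract() -> list:
        # In a real pipeline this would pull from an API, S3, or a database.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 80.0}]

    @task
    def transform(rows: list) -> float:
        return sum(r["amount"] for r in rows)

    @task
    def load(total: float) -> None:
        print(f"Daily revenue: {total}")  # stand-in for a warehouse write

    load(transform(extract()))


daily_sales_pipeline()
```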
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform.
- Experience in developing streaming pipelines (a minimal streaming sketch follows this listing).
- Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience
- Certification in Azure and Databricks, or Cloudera Spark certified developers.
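On the streaming-pipelines requirement, a minimal Spark Structured Streaming sketch reading JSON events from Kafka and writing to Delta; the broker address, topic, schema, and paths are hypothetical, and the Kafka source assumes the spark-sql-kafka package is on the cluster:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "orders")
       .load())

# Kafka delivers bytes; parse the value column as JSON into typed columns.
orders = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("o"))
          .select("o.*"))

query = (orders.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/orders")
         .outputMode("append")
         .start("/mnt/curated/orders_stream"))

query.awaitTermination()
```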
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Basavanagudi, Bengaluru, Karnataka
On-site
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark.
- Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
- Ensure high performance and reliability of ETL jobs in production.
- Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions.
- Implement data quality checks and data lineage tracking for transparency and auditability.
- Work on data ingestion, transformation, and integration from multiple structured and unstructured sources.
- Leverage Apache NiFi for automated and repeatable data flow management (if applicable).
- Write clean, efficient, and maintainable code in Python and Java.
- Contribute to architectural decisions, performance tuning, and scalability planning.

Required Skills:
- 5–7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience in using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience in building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Self-driven with the ability to work independently and as part of a team.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus, Yearly bonus

Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Preferred)

Application Question(s):
- Are you ready to join within 15 days?
- What is your current CTC?

Experience:
- Python: 4 years (Preferred)
- PySpark: 4 years (Required)
- Data warehouse: 4 years (Required)

Work Location: In person
Application Deadline: 12/06/2025
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Engineer
Experience: 3 - 7 Years
Location: Gurugram/Pune/Bangalore
Notice Period: Immediate to 30 days

Job Description
As a Data Engineer, you will work closely with a global hedge fund client on data engagements. You will partner with data strategy and sourcing teams to design scalable data pipelines and delivery architectures. The position requires strong coding skills in Python and SQL, including core programming and data manipulation; hands-on experience with cloud-native platforms; and solid expertise in core data engineering concepts.

Key Responsibilities
- Partner with data strategy and sourcing teams to define data requirements and design data pipelines
- Engage with vendors and technical teams to ingest, evaluate, and curate valuable data assets
- Collaborate with the core engineering team to develop scalable data processing and distribution capabilities
- Implement robust data quality checks to ensure the integrity of data deliveries
- Act as a subject matter expert on data asset offerings and collaborate with technical and non-technical stakeholders

Essential Skills and Experience
- 5+ years of experience in data engineering, including data modeling and warehousing
- Proficient in Python and Snowflake for data ingestion and back-end manipulation
- Skilled in handling various data formats such as Parquet, AVRO, JSON, and XLSX (see the sketch after this listing)
- Experience with web scraping and modifying scraping scripts
- Hands-on experience with FTP, SFTP, APIs, S3, and other data distribution methods
- Proficient with PySpark, Docker, and AWS cloud environments
- Understanding of financial modeling concepts

Educational Qualification
B.E./B.Tech in Computer Science or a related field

Key Metrics
Technologies: Python, SQL, Snowflake, PySpark, Docker
Focus Areas: Data engineering, pipeline creation, and data delivery

Behavioral Competencies
- Strong verbal and written communication skills
- Ability to manage and collaborate with client stakeholders effectively

Skills & Requirements
Python, SQL, AWS, Snowflake, PySpark, Docker, Data Modeling, Data Warehousing, Data Pipelines, Web Scraping, Parquet, Avro, JSON, XLSX, FTP, SFTP, API, S3, AWS Cloud, Financial Modeling, Data Ingestion, Data Manipulation, Data Quality, Client Stakeholder Management
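A small illustration of handling the mixed file formats this role lists, reading Parquet, JSON, and XLSX into Spark; the bucket names and sheet are hypothetical, and the XLSX step goes through pandas, which assumes openpyxl is installed:

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vendor-ingest").getOrCreate()

# Columnar and semi-structured sources load natively in Spark.
trades = spark.read.parquet("s3://vendor-drop/trades/")            # hypothetical bucket
positions = spark.read.json("s3://vendor-drop/positions/*.json")

# Spreadsheet deliveries are small, so read them via pandas and convert.
ref_pdf = pd.read_excel("/tmp/security_master.xlsx", sheet_name="master")
security_master = spark.createDataFrame(ref_pdf)

# Normalize everything to Parquet for downstream consumers.
for name, df in [("trades", trades), ("positions", positions),
                 ("security_master", security_master)]:
    df.write.mode("overwrite").parquet(f"s3://curated/{name}/")
```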
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
UWorld is a worldwide leader in online test prep for college entrance, undergraduate, graduate, and professional licensing exams throughout the United States. Since 2003, over 2 million students have trusted us to help them prepare for high-stakes examinations. We are seeking a Senior Data Engineer who is passionate about creating an excellent user experience and enjoys taking on new challenges. The Data Engineer will be responsible for the design, development, testing, deployment, and support of our Data Analytics and Data Warehouse platform.

Requirements (Minimum Experience):
- Master's/Bachelor's degree in Computer Science or a related field.
- 6+ years of experience as a Data Engineer with experience in data analysis, ingestion, cleansing, validation, verification, and presentation (reports and dashboards).
- 4+ years of working knowledge and experience utilizing the following: Python, Spark/PySpark, big data platforms (Databricks/Delta Lake), REST services, MS SQL Server/MySQL, MongoDB, and Azure Cloud.
- Experience with SQL, PL/SQL, and relational databases (MS SQL Server/MySQL/Oracle).
- Experience with Tableau/Power BI, NoSQL (MongoDB), and Kafka is a plus.
- Experience with REST APIs, web services, JSON, build and deployment pipelines (Maven, Ansible, Git), and cloud environments (Azure, AWS, GCP) is desirable.

Job Responsibilities (the data engineer will perform the following duties):
- Understand data services and analytics needs across the organization and work on the data warehouse and reporting infrastructure to empower them with accurate information for decision-making.
- Develop and maintain a data warehouse that aggregates data from multiple content sources, including NoSQL DBs, RDBMS, BigQuery, Salesforce, social media, other 3rd-party web services (RESTful, JSON), flat-file stores, and application databases (OLTPs). A minimal ingestion sketch follows this listing.
- Use Python, Spark/PySpark, Databricks, Delta Lake, SQL Server, MongoDB, Jira, Git/Bitbucket, Confluence, REST services, Tableau, Unix/Linux shell scripting, and Azure Cloud for data ingestion, processing, transformations, warehousing, and reporting.
- Develop scalable data pipelines using data connectors, distributed processing transformations, schedulers, and the data warehouse.
- Apply understanding of data structures, analytics, data modeling, and software architecture to problem solving.
- Develop, modify, and test algorithms that can be used in scripts to store, locate, cleanse, verify, validate, and retrieve specific documents, data, and information.
- Develop analytics to understand product sales, marketing impact, and application usage for UWorld products and applications.
- Employ best practices for code sharing and development to ensure a common code base abstraction across all applications.
- Continuously stay up to date on industry-standard practices in big data and analytics and adopt solutions to the UWorld data warehousing platform.
- Work with QA engineers to ensure the quality and reliability of all reports, extracts, and dashboards through a process of continuous improvement.
- Collaborate with technical architects, developers, subject matter experts, the QA team, and the customer care team to drive new enhancements or fix bugs promptly.
- Work in an agile environment such as Scrum.

Soft Skills:
- Working proficiency and communication skills in verbal and written English.
- Excellent attention to detail and organization skills, and the ability to articulate ideas clearly and concisely.
- Ability to work effectively within a changing environment that is going through high growth.
- Exceptional follow-through, personal drive, and ability to understand direction and feedback.
- Positive attitude with a willingness to put aside ego for the sake of what is best for the team.
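A hedged sketch of pulling a third-party REST (JSON) source into the warehouse staging layer; the endpoint, payload shape, and target path are illustrative assumptions:

```python
import json

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rest-ingest").getOrCreate()

# Hypothetical third-party endpoint returning a JSON array of records.
resp = requests.get("https://api.example.com/v1/course-usage", timeout=30)
resp.raise_for_status()
records = resp.json()

# Land the payload as a DataFrame via Spark's JSON reader, then persist it to staging.
rdd = spark.sparkContext.parallelize([json.dumps(r) for r in records])
usage = spark.read.json(rdd)

(usage.write
      .format("delta")
      .mode("append")
      .save("/mnt/staging/course_usage"))
```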
Posted 1 week ago
0.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Chennai, Tamil Nadu, India

Qualification:
Skills: Big Data, PySpark, Python, Hadoop/HDFS, Spark; good to have: GCP

Roles/Responsibilities:
- Develops and maintains scalable data pipelines to support continuing increases in data volume and complexity.
- Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
- Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Writes unit/integration tests, contributes to the engineering wiki, and documents work.
- Performs data analysis required to troubleshoot data-related issues and assists in the resolution of data issues.
- Works closely with a team of front-end and back-end engineers, product managers, and analysts.
- Defines company data assets (data models) and Spark, SparkSQL, and HiveSQL jobs to populate data models.
- Designs data integrations and the data quality framework.

Basic Qualifications:
- BS or MS degree in Computer Science or a related technical field
- 4+ years of SQL experience (NoSQL experience is a plus)
- 4+ years of experience with schema design and dimensional data modelling
- 4+ years of experience with Big Data technologies like Spark and Hive
- 2+ years of experience in data engineering on Google Cloud Platform services like BigQuery

Skills Required: Big Data, PySpark, Python, Hadoop/HDFS, Spark
Experience: 4 to 7 years
Job Reference Number: 12907
Posted 1 week ago
0.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Chennai, Tamil Nadu, India

Qualification:
5+ years of experience with Java + Big Data as the minimum required skill set: Java, microservices, Spring Boot, APIs, and Big Data (Hive, Spark, PySpark).

Skills Required: Java, Big Data, Spark
Experience: 5 to 7 years
Job Reference Number: 13049
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India; Hyderabad, Telangana, India

Qualification:
- Strong experience in Python
- 2+ years' experience of working on feature/data pipelines using PySpark
- Understanding and experience around data science
- Exposure to AWS cloud services such as SageMaker, Bedrock, Kendra, etc.
- Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices
- Experience with statistical models, e.g., multinomial logistic regression
- Experience of technical architecture, design, deployment, and operational-level knowledge
- Exploratory data analysis
- Knowledge around model building, hyperparameter tuning, and model performance metrics
- Statistics knowledge (probability distributions, hypothesis testing)
- Time series modelling, forecasting, image/video analytics, and natural language processing (NLP)

Good to have:
- Experience researching and applying large language and Generative AI models
- Experience with LangChain, LlamaIndex, foundation model tuning, data augmentation, and performance evaluation frameworks
- Able to provide analytical expertise in the process of model development, refining, and implementation in a variety of analytics problems
- Knowledge of Docker and Kubernetes

Skills Required: Machine Learning, Natural Language Processing, AWS SageMaker, Python

Role:
- Generate actionable insights for business improvements.
- Ability to understand business requirements.
- Write clean, efficient, and reusable code following best practices.
- Troubleshoot and debug applications to ensure optimal performance.
- Write unit test cases.
- Collaborate with cross-functional teams to define and deliver new features.
- Use case derivation and solution creation from structured/unstructured data.
- Actively drive a culture of knowledge-building and sharing within the team.
- Experience applying theoretical models in an applied environment.
- MLOps, data pipelines, data engineering.
- Statistics knowledge (probability distributions, hypothesis testing).

Experience: 4 to 5 years
Job Reference Number: 13027
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification:
- 5-7 years of good hands-on exposure with Big Data technologies: PySpark (DataFrame and SparkSQL), Hadoop, and Hive
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis, and research skills
- Demonstrable ability to think outside of the box and not be dependent on readily available tools
- Excellent communication, presentation, and interpersonal skills are a must

Good to have:
- Hands-on experience with Cloud Platform provided Big Data technologies (i.e., IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow and any job scheduler experience
- Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations

Skills Required: Python, PySpark, AWS

Role:
- Develop efficient ETL pipelines as per business requirements, following the development standards and best practices (a minimal Glue-style sketch follows this listing).
- Perform integration testing of the created pipelines in the AWS environment.
- Provide estimates for development, testing, and deployments on different environments.
- Participate in code peer reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines with the required AWS services, i.e., S3, IAM, Glue, EMR, Redshift, etc.

Experience: 8 to 10 years
Job Reference Number: 13025
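A hedged sketch of a minimal Glue-style PySpark ETL job of the kind this role describes; the catalog database, table, and bucket names are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders").toDF()

# Simple transformation: drop invalid rows and stamp the load date.
curated = (orders.filter(F.col("amount") > 0)
                 .withColumn("load_date", F.current_date()))

# Write curated Parquet back to S3 (hypothetical bucket).
curated.write.mode("overwrite").parquet("s3://curated-zone/orders/")

job.commit()
```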
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India

Qualification:
- 6-8 years of good hands-on exposure with Big Data technologies: PySpark (DataFrame and SparkSQL), Hadoop, and Hive
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis, and research skills
- Demonstrable ability to think outside of the box and not be dependent on readily available tools
- Excellent communication, presentation, and interpersonal skills are a must
- Hands-on experience with Cloud Platform provided Big Data technologies (i.e., IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow and any job scheduler experience
- Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations

Skills Required: Python, PySpark, SQL

Role:
- Develop efficient ETL pipelines as per business requirements, following the development standards and best practices.
- Perform integration testing of the created pipelines in the AWS environment.
- Provide estimates for development, testing, and deployments on different environments.
- Participate in code peer reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines with the required AWS services, i.e., S3, IAM, Glue, EMR, Redshift, etc.

Experience: 6 to 8 years
Job Reference Number: 13024
Posted 1 week ago
0.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Pune, Maharashtra, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification:
- Strong hands-on experience in Python
- Good experience with Spark/Spark Structured Streaming
- Experience of working on MSK (Kafka) and Kinesis
- Ability to design, build, and unit test applications on the Spark framework in Python
- Exposure to AWS cloud services such as Glue/EMR, RDS, SNS, SQS, Lambda, Redshift, etc.
- Good experience writing SQL queries
- Strong technical development experience in effectively writing code, code reviews, and best practices
- Ability to solve complex data-driven scenarios and triage defects and production issues
- Ability to learn-unlearn-relearn concepts with an open and analytical mindset

Skills Required: PySpark, SQL

Role:
- Work closely with business and product management teams to develop and implement analytics solutions.
- Collaborate with engineers and architects to implement and deploy scalable solutions.
- Actively drive a culture of knowledge-building and sharing within the team.
- Able to quickly adapt and learn.
- Able to jump into an ambiguous situation and take the lead on resolution.

Good to have:
- Experience of working on MSK (Kafka), Amazon Elastic Kubernetes Service, and Docker
- Exposure to GitHub Actions, Argo CD, Argo Workflows
- Experience of working on Databricks

Experience: 4 to 6 years
Job Reference Number: 12555
Posted 1 week ago
0.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India

Qualification:
Do you love to work on bleeding-edge Big Data technologies, do you want to work with the best minds in the industry, and create high-performance scalable solutions? Do you want to be part of the team that is solutioning next-gen data platforms? Then this is the place for you. You want to architect and deliver solutions involving data engineering on a petabyte scale of data that solve complex business problems. Impetus is looking for a Big Data Developer who loves solving complex problems, and architects and delivers scalable solutions across a full spectrum of technologies.

- Experience in providing technical leadership in the Big Data space (Hadoop stack like Spark, M/R, HDFS, Hive, etc.)
- Should be able to communicate with the customer on the functional and technical aspects
- Expert-level proficiency in Python/PySpark
- Hands-on experience with Shell/Bash scripting (creating and modifying scripting files)
- Control-M, AutoSys, or any job scheduler experience
- Experience in visualizing and evangelizing next-generation infrastructure in the Big Data space (batch, near real-time, and real-time technologies)
- Should be able to guide the team on any functional and technical issues
- Strong technical development experience in effectively writing code, code reviews, and best-practice code refactoring
- Passionate about continuous learning, experimenting, applying, and contributing towards cutting-edge open-source technologies and software paradigms
- Good communication, problem-solving, and interpersonal skills
- Self-starter and resourceful personality with the ability to manage pressure situations
- Capable of providing the design and architecture for typical business problems
- Exposure and awareness of the complete PDLC/SDLC
- Out-of-the-box thinker and not just limited to the work done in the projects

Must Have:
- Experience with AWS (EMR, Glue, S3, RDS, Redshift)
- Cloud certification

Skills Required: AWS, PySpark, Spark

Role:
- Evaluate and recommend the Big Data technology stack best suited for customer needs.
- Design/architect/implement various solutions arising out of high-concurrency systems.
- Be responsible for timely and quality deliveries.
- Anticipate technological evolutions.
- Ensure the technical directions and choices.
- Develop efficient ETL pipelines through Spark or Hive.
- Drive significant technology initiatives end to end and across multiple layers of architecture.
- Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements.
- Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable amount (GB/TB) of data.
- Identify and work on incorporating non-functional requirements into the solution (performance, scalability, monitoring, etc.).

Experience: 8 to 12 years
Job Reference Number: 12400
Posted 1 week ago
0.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India; Gurgaon, Haryana, India; Indore, Madhya Pradesh, India

Qualification:
Job Title: Java + Bigdata Engineer
Company Name: Impetus Technologies

Job Description:
Impetus Technologies is seeking a skilled Java + Bigdata Engineer to join our dynamic team. The ideal candidate will possess strong expertise in Java programming and have hands-on experience with Bigdata technologies.

Responsibilities:
- Design, develop, and maintain robust big data applications using Java and related technologies.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Optimize application performance and scalability to handle large data sets effectively.
- Implement data processing solutions using frameworks such as Apache Hadoop, Apache Spark, or similar tools.
- Participate in code reviews, debugging, and troubleshooting of applications to ensure high-quality code standards.
- Stay updated with the latest trends and advancements in big data technologies and Java developments.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong proficiency in Java programming and experience with object-oriented design principles.
- Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, or similar frameworks.
- Familiarity with cloud platforms and data storage solutions (AWS, Azure, etc.).
- Excellent problem-solving skills and a proactive approach to resolving technical challenges.
- Strong communication and interpersonal skills, with the ability to work collaboratively in a team-oriented environment.

At Impetus Technologies, we value innovation and encourage our employees to push boundaries. If you are a passionate Java + Bigdata Engineer looking to take your career to the next level, we invite you to apply and be part of our growing team.

Skills Required: Java, Spark, PySpark, Hive, microservices

Roles and Responsibilities:
- Design, develop, and maintain scalable applications using Java and Big Data technologies.
- Collaborate with cross-functional teams to gather requirements and understand project specifications.
- Implement data processing and analytics solutions leveraging frameworks such as Apache Hadoop, Apache Spark, and others.
- Optimize application performance and ensure data integrity throughout the data lifecycle.
- Conduct code reviews and implement best practices to enhance code quality and maintainability.
- Troubleshoot and resolve issues related to application performance and data processing.
- Develop and maintain technical documentation related to application architecture, design, and deployment.
- Stay updated with industry trends and emerging technologies in Java and Big Data ecosystems.
- Participate in Agile development processes including sprint planning, backlog grooming, and daily stand-ups.
- Mentor junior engineers and provide technical guidance to ensure successful project delivery.

Experience: 4 to 7 years
Job Reference Number: 13044
Posted 1 week ago
0.0 - 18.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bengaluru, Karnataka, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India; Hyderabad, Telangana, India

Qualification:
- Overall 10-18 years of data engineering experience with a minimum of 4+ years of hands-on experience in Databricks.
- Ready to travel onsite and work at client locations.
- Proven hands-on experience as a Databricks Architect or similar role with a deep understanding of the Databricks platform and its capabilities.
- Analyze business requirements and translate them into technical specifications for data pipelines, data lakes, and analytical processes on the Databricks platform.
- Design and architect end-to-end data solutions, including data ingestion, storage, transformation, and presentation layers, to meet business needs and performance requirements.
- Lead the setup, configuration, and optimization of Databricks clusters, workspaces, and jobs to ensure the platform operates efficiently and meets performance benchmarks.
- Manage access controls and security configurations to ensure data privacy and compliance.
- Design and implement data integration processes, ETL workflows, and data pipelines to extract, transform, and load data from various sources into the Databricks platform. Optimize ETL processes to achieve high data quality and reduce latency.
- Monitor and optimize query performance and overall platform performance to ensure efficient execution of analytical queries and data processing jobs. Identify and resolve performance bottlenecks in the Databricks environment.
- Establish and enforce best practices, standards, and guidelines for Databricks development, ensuring data quality, consistency, and maintainability.
- Implement data governance and data lineage processes to ensure data accuracy and traceability.
- Mentor and train team members on Databricks best practices, features, and capabilities. Conduct knowledge-sharing sessions and workshops to foster a data-driven culture within the organization.
- Will be responsible for Databricks Practice technical/partnership initiatives.
- Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.

Skills Required: Databricks, Unity Catalog, PySpark, ETL, SQL, Delta Live Tables

Role:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- In-depth hands-on implementation knowledge of Databricks: Delta Lake, Delta tables (managing Delta tables), Databricks cluster configuration, and cluster policies.
- Experience handling structured and unstructured datasets.
- Strong proficiency in programming languages like Python, Scala, or SQL.
- Experience with cloud platforms like AWS, Azure, or Google Cloud, and understanding of cloud-based data storage and computing services.
- Familiarity with big data technologies like Apache Spark, Hadoop, and data lake architectures.
- Develop and maintain data pipelines, ETL workflows, and analytical processes on the Databricks platform.
- Should have good experience in data engineering in Databricks, both batch processing and streaming.
- Should have good experience in creating workflows and scheduling the pipelines.
- Should have good exposure to how to make packages or libraries available in Databricks.
- Familiarity with Databricks default runtimes.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Should have experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Experience: 10 to 18 years
Job Reference Number: 12932
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Data Scientist II
Bangalore, Karnataka, India
Date posted: Jun 09, 2025
Job number: 1828092
Work site: Microsoft on-site only
Travel: 0-25%
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Data Science
Employment type: Full-Time

Overview
Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

The Cloud App and Identity Research (CAIR) team is leading the security research of Microsoft Defender for Cloud Apps. We are working on the edge technology of AI and Cloud. Researchers in the team are world-class experts in cloud-related threats; they are talented and enthusiastic employees. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
- 5+ years of programming language experience in C/C++/C#/Python required, and hands-on experience using technologies such as Spark, Azure ML, SQL, KQL, Databricks, etc.
- Able to prepare data pipelines and feature engineering pipelines to build robust models using SQL, PySpark, Azure Data Studio, etc.
- Knowledge of classification, prediction, anomaly detection, optimization, graph ML, and NLP.
- Must be comfortable manipulating and analyzing complex, high-dimensional data from various sources to solve difficult problems.
- Knowledge of working in a cloud-computing environment like Azure / AWS / Google Cloud.
- Proficient in relational databases (SQL), big data technologies (PySpark), and Azure storage technologies such as ADLS, Cosmos DB, etc.
- Generative AI experience is a plus.
- Bachelor's or higher degree in Computer Science, Statistics, Mathematics, Engineering, or related disciplines.

Responsibilities
- Build algorithms and innovative methods to discover and defend against real-world sophisticated cloud-based attacks in the SaaS ecosystem.
- Collaborate with other data scientists to develop machine learning systems for detecting anomalies, compromises, fraud, and non-human identity cyber-attacks using both Gen AI and graph-based systems.
- Identify and integrate multiple data sources, or types of data, and develop expertise with multiple data sources to tell a story, identify new patterns and business opportunities, and communicate visually and verbally with clear and compelling data-driven stories.
- Analyze extensive datasets and develop a robust, scalable feature engineering pipeline within a PySpark-based environment (a small sketch follows this listing).
- Acquire and use broad knowledge of innovative methods, algorithms, and tools from within Microsoft and from the scientific literature, and apply your own analysis of scalability and applicability to the formulated problem.
- Work across threat researchers, engineering, and product teams to enable metrics for product success.
- Contribute to active engagement with the security ecosystem through research papers, presentations, and blogs.
- Provide subject matter expertise to customers based on industry attack trends and product capabilities.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
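As a hedged illustration of the feature-engineering-for-anomaly-detection work described above, a small PySpark sketch that builds per-user sign-in features and flags outliers with a simple z-score; the sign-in table, columns, and threshold are hypothetical, not Microsoft's actual detection logic:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

signins = spark.read.format("delta").load("/mnt/security/signin_events")  # hypothetical table

# Per-user, per-day features: event volume and distinct source IPs.
features = (signins
            .groupBy("user_id", F.to_date("event_time").alias("day"))
            .agg(F.count("*").alias("events"),
                 F.countDistinct("source_ip").alias("distinct_ips")))

# Baseline each user against their own history and flag large deviations.
stats = features.groupBy("user_id").agg(
    F.mean("events").alias("mu"), F.stddev("events").alias("sigma"))

scored = (features.join(stats, "user_id")
          .withColumn("zscore", (F.col("events") - F.col("mu")) / F.col("sigma"))
          .withColumn("is_anomaly", F.col("zscore") > 3))  # illustrative threshold

scored.filter("is_anomaly").show()
```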
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information
Number of Positions: 1
Industry: Engineering
Date Opened: 06/09/2025
Job Type: Permanent
Work Experience: 2-3 years
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560037
Location: Bangalore

About Us
CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, Cloud transformation, and end-to-end DevOps workflows. We are a proud Advanced Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions. We are passionate about what we do. The novelty and the excitement of helping our customers accomplish their goals drives us to become excellent at what we do.

Job Description
Culture at CloudifyOps: Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us.

About the Role:
We are seeking a proactive and technically skilled AI/ML Engineer with 2-3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, Agentic AI, and Generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges.

Key Responsibilities:

1. AWS-Based Machine Learning
- Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2.
- Implement serverless ML workflows using Lambda, Step Functions, and EventBridge.
- Optimize models for cost/performance using AWS Inferentia/Trainium.

2. MLOps & Productionization
- Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow.
- Containerize models with Docker and deploy via AWS EKS/ECS/Fargate.
- Monitor models in production using AWS CloudWatch and SageMaker Model Monitor.

3. Agentic AI Development
- Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation.
- Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services.
- Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement (a minimal retrieval sketch follows this listing).

4. Generative AI & LLMs
- Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA.
- Build Generative AI apps (chatbots, content generators) with LangChain, LlamaIndex.
- Optimize prompts and evaluate LLM performance using AWS Bedrock/Amazon Titan.

5. Collaboration & Innovation
- Work with cross-functional teams to translate business needs into AI solutions.
- Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems.
- Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML).

6. Governance & Documentation
- Implement model governance frameworks to ensure ethical AI/ML deployments.
- Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring).
- Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, ReadTheDocs).
- Create runbooks for model deployment, troubleshooting, and scaling.

Technical Skills
- Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers).
- AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM.
- MLOps: MLflow, Kubeflow, Docker, GitHub Actions/GitLab CI.
- Generative AI: prompt engineering, LLM fine-tuning, RAG, LangChain.
- Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration.
- Data Engineering: SQL, PySpark, AWS Glue/EMR.

Soft Skills
- Strong problem-solving and analytical thinking.
- Ability to explain complex AI concepts to non-technical stakeholders.

What We're Looking For
- Bachelor's/Master's in CS, AI, Data Science, or a related field.
- 2-3 years of industry experience in AI/ML engineering.
- Portfolio of deployed ML/AI projects (GitHub, blog, case studies).
- Good to have an AWS Certified Machine Learning Specialty certification.

Why Join Us?
- Innovative Projects: Work on cutting-edge AI applications that push the boundaries of technology.
- Collaborative Environment: Join a team of passionate engineers and researchers committed to excellence.
- Career Growth: Opportunities for professional development and advancement in the rapidly evolving field of AI.

Equal opportunity employer
CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
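For the RAG point flagged above, a minimal retrieval sketch using plain numpy cosine similarity over pre-computed document embeddings; the embedding function and documents are stand-ins, and a real system would call an embedding model, for example one exposed through Amazon Bedrock:

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash characters into a fixed-size vector.
    In practice this would call an embedding model (e.g., via Amazon Bedrock)."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)


# Hypothetical knowledge base with pre-computed embeddings.
documents = [
    "Reset a user password from the admin console.",
    "Rotate IAM access keys every 90 days.",
    "Deploy the model endpoint with SageMaker.",
]
doc_vectors = np.stack([embed(d) for d in documents])


def retrieve(query: str, k: int = 2) -> list:
    scores = doc_vectors @ embed(query)          # cosine similarity (vectors are normalized)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]


# Retrieved passages are then prepended to the LLM prompt as grounding context.
question = "how do I rotate access keys?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```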
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
Title: Azure Data Engineer
Location: Remote
Employment type: Full Time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks (a small lakehouse upsert sketch follows this listing)

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For
- 3+ years experience in data engineering or analytics engineering
- Hands-on with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling
- Solid knowledge of data warehouse best practices, development standards, and methodologies
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
- Be an independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment
- Excellent communication and teamwork abilities

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge
- SAP ECC / S/4 and HANA knowledge
- Intermediate knowledge of Power BI
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
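A hedged sketch of a common lakehouse pattern this role touches on, an incremental upsert into a Delta table using the delta-spark API; the table paths and merge key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New batch of changed records landed by the ingestion pipeline (hypothetical path).
updates = spark.read.format("delta").load("/mnt/staging/customers_changes")

target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# Upsert: update existing customers, insert new ones, keyed on customer_id.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```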
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Skills: Python, Django, Lambda, PySpark, CRON, MySQL, Amazon Web Services (AWS)

We are hiring a Python Developer for one of our clients.

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description
- Python development, backend experience.
- Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark).
- Excellent debugging skills to resolve production issues.
- Experience with MySQL, NoSQL databases.

Optional Skills
- Experience with Django and CRON jobs.
- Familiarity with data lakes, big data tools, and CI/CD.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Skills: Python, SQL, Django, AWS Lambda, CRON jobs, CI/CD, data lakes

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description
- Python development, backend experience.
- Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark).
- Excellent debugging skills to resolve production issues.
- Experience with MySQL, NoSQL databases.

Optional Skills
- Experience with Django and CRON jobs.
- Familiarity with data lakes, big data tools, and CI/CD.

If interested, you can share your resume at heena@aliqan.com
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Skills: Python Development, Django, CRON, MySQL, NoSQL, Python, SQL, Glue

We are hiring a Python Developer for one of our clients.

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description
- Python development, backend experience.
- Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark).
- Excellent debugging skills to resolve production issues.
- Experience with MySQL, NoSQL databases.

Optional Skills
- Experience with Django and CRON jobs.
- Familiarity with data lakes, big data tools, and CI/CD.
Posted 1 week ago