5.0 - 8.0 years
18 - 25 Lacs
Pune
Work from Office
We are seeking a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.

Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elasticsearch, AWS S3, Snowflake, and NFS (a minimal pipeline sketch follows below).
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum, and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture.

Requirements (mandatory):
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- Minimum 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elasticsearch and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
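To illustrate the kind of pipeline this role owns, here is a minimal, hypothetical Airflow (2.x) task that copies data already staged in AWS S3 into a Snowflake table. The DAG id, connection details, stage, and table names are assumptions for illustration only, not part of the posting.

```python
# Minimal sketch of an Airflow DAG that loads a staged S3 file into Snowflake.
# All names (DAG id, credentials, stage, table) are illustrative assumptions.
from datetime import datetime

import snowflake.connector
from airflow import DAG
from airflow.operators.python import PythonOperator


def copy_s3_file_into_snowflake():
    # Assumes an external stage (my_s3_stage) already points at the S3 bucket.
    conn = snowflake.connector.connect(
        account="my_account",      # hypothetical
        user="etl_user",           # hypothetical
        password="***",            # use a secrets backend in practice
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        conn.cursor().execute(
            "COPY INTO raw.orders "
            "FROM @my_s3_stage/orders/ "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
    finally:
        conn.close()


with DAG(
    dag_id="s3_to_snowflake_orders",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_orders = PythonOperator(
        task_id="copy_orders_into_snowflake",
        python_callable=copy_s3_file_into_snowflake,
    )
```

In practice the COPY statement would usually be parameterized per load window, and credentials would come from an Airflow connection or secrets manager rather than being written in the DAG file.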
Posted 1 month ago
4.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Chennai
Work from Office
Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines (a minimal PySpark sketch follows below).
• Integrate data from multiple sources or vendors to provide holistic insights from the data.
• Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform EDA (exploratory data analysis) to troubleshoot data-related issues and assist in their resolution.
• Experience in client interaction, both oral and written.
• Experience mentoring juniors and providing guidance to the team.
Required Technical Skills:
• Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
• Strong experience in Data Warehousing, ETL, Data Modelling, building ETL pipelines, and Data Architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services such as AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
• Knowledge of Dataiku and Big Data technologies is good to have; basic knowledge of BI tools like Power BI or Tableau is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the Pharma domain / life sciences commercial data operations.
Qualifications:
• Bachelor's or Master's in Engineering/MCA or an equivalent degree.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working on Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.
Location:
• Preferably Hyderabad/Chennai, India
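As a rough illustration of the pipeline work described above, here is a minimal PySpark sketch that reads a vendor extract from S3, applies a few cleanup rules, and writes curated Parquet. The bucket, paths, and column names are assumptions, not taken from the posting.

```python
# Minimal sketch of a PySpark ETL step: read a vendor extract from S3,
# standardize a few columns, and write Parquet to a curated zone.
# Bucket names, paths, and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vendor_sales_ingest").getOrCreate()

raw = spark.read.csv("s3a://my-pharma-bucket/raw/sales/", header=True, inferSchema=True)

cleaned = (
    raw.withColumn("sale_date", F.to_date("sale_date", "yyyy-MM-dd"))
       .withColumn("product_id", F.upper(F.trim("product_id")))
       .dropDuplicates(["transaction_id"])          # basic data-quality rule
       .filter(F.col("net_units").isNotNull())
)

cleaned.write.mode("overwrite").partitionBy("sale_date").parquet(
    "s3a://my-pharma-bucket/curated/sales/"
)
```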
Posted 1 month ago
6.0 - 8.0 years
8 - 12 Lacs
Gurugram
Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary:
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets (see the sketch below).
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, ETL pipelines, AWS, S3, Airflow, Control-M, SQL, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, performance tuning, data modeling, data validation, unit test cases, real-time and batch integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, API integration, AI/ML model development, Informatica, Tableau, Jasper, QlikView, ETL tools
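As a rough illustration of the CDC requirement above, here is a minimal PySpark sketch that applies change records to a snapshot by keeping the latest change per key and dropping deletes. The column names and paths are assumptions for illustration only.

```python
# Minimal sketch of applying CDC records to a snapshot with PySpark:
# keep only the latest change per key, then drop keys whose latest op is a delete.
# Column names (customer_id, op, updated_at) and paths are illustrative assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc_apply").getOrCreate()

changes = spark.read.parquet("s3a://my-bucket/cdc/customers/")  # hypothetical path

latest_per_key = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

current_snapshot = (
    changes.withColumn("rn", F.row_number().over(latest_per_key))
           .filter(F.col("rn") == 1)       # newest change wins
           .filter(F.col("op") != "D")     # drop records whose latest op is a delete
           .drop("rn", "op")
)

current_snapshot.write.mode("overwrite").parquet("s3a://my-bucket/curated/customers/")
```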
Posted 1 month ago
6.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary:
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, ETL pipelines, AWS, S3, Airflow, Control-M, SQL, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, performance tuning, data modeling, data validation, unit test cases, real-time and batch integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, API integration, AI/ML model development, Informatica, Tableau, Jasper, QlikView, ETL tools
Posted 1 month ago
6.0 - 11.0 years
17 - 30 Lacs
Kolkata, Hyderabad/Secunderabad, Bangalore/Bengaluru
Hybrid
Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
- Experience in the IT industry.
- Working experience building productionized data ingestion and processing pipelines in Snowflake.
- Strong understanding of Snowflake architecture.
- Fully conversant with data warehousing concepts.
- Expertise and excellent understanding of Snowflake features and of integrating Snowflake with other data processing systems.
- Able to create data pipelines for ETL/ELT.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem-solve and architect in an environment with unclear requirements.
- Able to create high-level and low-level design documents based on requirements.
- Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
- Awareness of data visualisation tools and methodologies.
- Works independently on business problems and generates meaningful insights.
- Experience/knowledge of Snowpark, Streamlit, or GenAI is good to have but not mandatory.
- Should have experience implementing Snowflake best practices.
- Snowflake SnowPro Core Certification will be an added advantage.

Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to the customer, and working with offshore teams.
- Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit.
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python or PySpark.
- Some experience with Snowflake RBAC and data security.
- Good experience implementing CDC or SCD Type 2 (a brief sketch follows the skills summary below).
- Good experience implementing Snowflake best practices.
- In-depth understanding of data warehouse and ETL concepts and data modelling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Experience building data ingestion pipelines.
- Optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have: experience with deployment using CI/CD tools and with repositories such as Azure Repos, GitHub, etc.

Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Snowflake Data Engineer.

Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and Data Warehousing concepts
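As a rough illustration of the SCD Type 2 requirement, here is a minimal sketch of driving an SCD Type 2 load in Snowflake from Python. The table names, columns, and connection details are assumptions; the two statements expire changed current rows and then insert new current versions.

```python
# Minimal sketch of an SCD Type 2 load into Snowflake from Python.
# Table names, columns, and connection details are illustrative assumptions.
import snowflake.connector

MERGE_SQL = """
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
  ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHEN MATCHED AND tgt.address <> src.address THEN UPDATE SET
  tgt.is_current = FALSE,
  tgt.valid_to   = CURRENT_TIMESTAMP()
"""

INSERT_SQL = """
INSERT INTO dim_customer (customer_id, address, valid_from, valid_to, is_current)
SELECT src.customer_id, src.address, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer src
LEFT JOIN dim_customer tgt
  ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHERE tgt.customer_id IS NULL
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",   # hypothetical
    warehouse="ETL_WH", database="ANALYTICS", schema="DW",
)
try:
    cur = conn.cursor()
    cur.execute(MERGE_SQL)    # expire current rows whose tracked attribute changed
    cur.execute(INSERT_SQL)   # insert new current versions and brand-new keys
finally:
    conn.close()
```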
Posted 1 month ago
4.0 - 9.0 years
22 - 27 Lacs
Bengaluru
Work from Office
We are looking for a skilled Veeam Backup Administrator to manage and maintain backup, replication, and disaster recovery solutions using Veeam Backup & Replication. The ideal candidate should have hands-on experience configuring backup solutions across on-premise and cloud environments, with a focus on automation, reporting, and BCP/DR planning.

Key Responsibilities:
- Manage and configure the Veeam Backup & Replication infrastructure.
- Schedule and monitor backup, backup copy, and replication jobs.
- Set up backup and copy jobs from on-prem to AWS S3.
- Configure and manage Veeam ONE for performance monitoring and reporting.
- Automate and schedule reports for backup and replication job statuses.
- Configure Veeam Enterprise Manager for centralized backup administration.
- Set up tape backups within the Veeam environment.
- Implement immutable repositories for enhanced data security (see the sketch below for the S3 side of this).
- Configure storage snapshots in DD Boost and Unity storage.
- Design and execute BCP/DR strategies and perform server-level testing for recovery readiness.

Required Skills:
- Hands-on experience with Veeam Backup & Replication.
- Proficiency in Veeam ONE, Enterprise Manager, and tape backup configuration.
- Experience in backup to cloud storage (AWS S3).
- Strong understanding of immutable backups and snapshot technology.
- Knowledge of DD Boost, Unity storage, and storage replication.
- Experience in BCP/DR planning and execution.
- Good troubleshooting and documentation skills.

Technical Key Skills: Veeam Backup, Replication, AWS S3, Veeam ONE, Enterprise Manager, Tape Backup, Immutable Backup, DD Boost, Unity Storage, BCP/DR
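For context on the immutability bullet, here is a minimal boto3 sketch of the S3 Object Lock mechanics that immutable cloud repositories rely on. The bucket name, region, and retention period are assumptions; in practice Veeam itself manages the immutability window when the repository is configured, so this is illustration only.

```python
# Minimal sketch of the S3 Object Lock mechanics behind an immutable cloud repository.
# Bucket name, region, and retention period are illustrative assumptions.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# Object Lock must be enabled when the bucket is created; it cannot be added later.
s3.create_bucket(
    Bucket="backup-repo-immutable",
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
    ObjectLockEnabledForBucket=True,
)

# Write an object that cannot be deleted or overwritten until the retain-until date.
s3.put_object(
    Bucket="backup-repo-immutable",
    Key="restore-points/job-2024-06-01.vbk",
    Body=b"...backup payload...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```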
Posted 1 month ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
- Strong experience with Python, SQL, PySpark, and AWS Glue (a minimal Glue job sketch follows below).
- Good to have: shell scripting, Kafka.
- Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed).
- Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Orchestration using Airflow.
- Good to have: streaming technologies and processing engines such as Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Good debugging skills.
- Strong hands-on design and engineering background in AWS across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Strong experience with and implementation of data lake, data warehouse, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Demonstrable knowledge of applying data engineering best practices (coding practices for DS, unit testing, version control, code review).
- Experience in the Insurance domain preferred.
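As a rough illustration of the Glue requirement, here is a minimal AWS Glue (PySpark) job script skeleton. The job argument, catalog database, table, and output path are assumptions for illustration only.

```python
# Minimal sketch of an AWS Glue (PySpark) job: read a Glue Catalog table,
# apply a simple transformation, and write Parquet to S3.
# Database, table, and path names are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (e.g., by a crawler).
claims = glue_context.create_dynamic_frame.from_catalog(
    database="insurance_raw", table_name="claims"
).toDF()

open_claims = claims.filter(F.col("status") == "OPEN").withColumn(
    "ingest_date", F.current_date()
)

open_claims.write.mode("overwrite").parquet("s3://my-datalake/curated/open_claims/")

job.commit()
```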
Posted 1 month ago
6.0 - 8.0 years
10 - 20 Lacs
Noida, Hyderabad, Pune
Work from Office
- 3-4 years of hands-on experience with the Snowflake database.
- Strong SQL, PL/SQL, and Snowflake functionality experience.
- Strong exposure to Oracle, SQL Server, etc.
- Exposure to cloud storage services like AWS S3.
- 2-3 years of Informatica PowerCenter experience.
Posted 1 month ago
4.0 - 8.0 years
5 - 15 Lacs
Pune
Hybrid
Databuzz is hiring a Python Developer - 4+ yrs - Pune - Hybrid.
Please mail your profile to haritha.jaddu@databuzzltd.com with the below details if you are interested.

About DatabuzzLTD:
Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India. We are an ISO 27001 and GDPR compliant company.

CTC -
ECTC -
Notice Period/LWD - (Candidates serving notice period will be preferred)

Position: Python Developer
Location: Pune
Experience: 4+ yrs

Mandatory skills:
- 4-8 years of Python web development experience.
- Good knowledge of AWS serverless services such as AWS Lambda, AWS S3, and AWS Step Functions (a minimal sketch follows below).
- Good working experience with Flask, NumPy, pandas, JSON, unittest, Mongo, and SQL.
- Hands-on experience in the AWS cloud with an understanding of the various cloud services and offerings for development and deployment of applications.
- Good experience with Amazon RDS, MongoDB, and PostgreSQL.
- Good security fundamentals.
- Able to provision the AWS platform as code using Terraform.

Regards,
Haritha
Talent Acquisition Specialist
haritha.jaddu@databuzzltd.com
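As a rough illustration of the Flask-plus-AWS stack named above, here is a minimal Flask endpoint that serves a JSON document stored in S3. The route, bucket name, and object layout are assumptions for illustration only.

```python
# Minimal sketch of a Flask endpoint backed by AWS S3.
# The route, bucket name, and object layout are illustrative assumptions.
import boto3
from flask import Flask, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")

BUCKET = "my-app-documents"   # hypothetical bucket


@app.route("/documents/<doc_id>", methods=["GET"])
def get_document(doc_id: str):
    """Fetch a small JSON document stored in S3 and return it to the caller."""
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=f"documents/{doc_id}.json")
    except s3.exceptions.NoSuchKey:
        return jsonify({"error": "document not found"}), 404
    return app.response_class(obj["Body"].read(), mimetype="application/json")


if __name__ == "__main__":
    app.run(debug=True)
```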
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply with Sanjeevan Natarajan - sanjeevan.natarajan@careernet.in

Role & responsibilities:
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations (a minimal sketch follows below).
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile:
- Python, SQL, PySpark, Databricks, AWS (mandatory).
- Leadership experience in Data Engineering/Architecture.
- Added advantage: experience in Life Sciences / Pharma.
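As a rough illustration of orchestrating Databricks jobs via their APIs, here is a minimal sketch that triggers a job run through the Jobs REST API (jobs/run-now). The workspace URL, token source, and job id are assumptions; in an Airflow DAG this call would typically be wrapped in an operator or hook.

```python
# Minimal sketch of triggering a Databricks job run via the Jobs REST API.
# Workspace URL, token environment variable, and job id are illustrative assumptions.
import os

import requests

WORKSPACE_URL = "https://my-workspace.cloud.databricks.com"  # hypothetical workspace
TOKEN = os.environ["DATABRICKS_TOKEN"]                        # hypothetical env var
JOB_ID = 9876                                                 # hypothetical job id

response = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": JOB_ID, "notebook_params": {"run_date": "2024-06-01"}},
    timeout=30,
)
response.raise_for_status()
print("Triggered run:", response.json()["run_id"])
```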
Posted 1 month ago
5.0 - 7.0 years
18 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Type: Contract-to-Hire (C2H)

Job Summary:
We are looking for a skilled PySpark Developer with a mandatory minimum of 4 years of hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Job Summary:
We are seeking an experienced Informatica Developer with a strong background in data integration, cloud data platforms, and modern ETL tools. The ideal candidate will have hands-on expertise in Informatica Intelligent Cloud Services (IICS/CDI/IDMC), Snowflake, and cloud storage platforms such as AWS S3. You will be responsible for building scalable data pipelines, designing integration solutions, and resolving complex data issues across cloud and on-premises environments.

Key Responsibilities:
- Design, develop, and maintain robust data integration pipelines using Informatica PowerCenter and Informatica CDI/IDMC.
- Create and optimize mappings and workflows to load data into Snowflake, ensuring performance and accuracy.
- Develop and manage shell scripts to automate data processing and integration workflows.
- Implement data exchange processes between Snowflake and external systems, including AWS S3.
- Write complex SQL and SnowSQL queries for data validation, transformation, and reporting.
- Collaborate with business and technical teams to gather requirements and deliver integration solutions.
- Troubleshoot and resolve performance, data quality, and integration issues in a timely manner.
- Work on integrations with third-party applications like Salesforce and NetSuite (preferred).

Required Skills and Qualifications:
- 5+ years of hands-on experience in Informatica PowerCenter and Informatica CDI/IDMC.
- Minimum 3-4 years of experience with the Snowflake database and SnowSQL commands.
- Strong SQL development skills.
- Solid experience with AWS S3 and understanding of cloud data integration architecture.
- Proficiency in Unix/Linux shell scripting.
- Ability to independently design and implement end-to-end ETL workflows.
- Strong problem-solving skills and attention to detail.
- Experience working in Agile/Scrum environments.

Preferred Qualifications (Nice to Have):
- Experience integrating with Salesforce and/or NetSuite using Informatica.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Informatica certification(s) or Snowflake certifications.
Posted 1 month ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply with sanjeevan.natarajan@careernet.in

Role & responsibilities:
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile:
- Python, SQL, PySpark, Databricks, AWS (mandatory).
- Leadership experience in Data Engineering/Architecture.
- Added advantage: experience in Life Sciences / Pharma.
Posted 1 month ago
3.0 - 6.0 years
5 - 15 Lacs
Bengaluru
Hybrid
Databuzz is hiring a Python Developer (NSO) - 3+ yrs - Bangalore - Hybrid.
Please mail your profile to haritha.jaddu@databuzzltd.com with the below details if you are interested.

About DatabuzzLTD:
Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India. We are an ISO 27001 and GDPR compliant company.

CTC -
ECTC -
Notice Period/LWD - (Candidates serving notice period will be preferred)

Position: Python Developer (NSO)
Location: Bangalore
Experience: 3+ yrs

Mandatory skills:
- 3-5 years of experience in core Python and Django.
- Experience with NSO.
- Experience with AWS RDS, AWS S3, and AWS Step Functions.
- Experience with Docker, DynamoDB, and microservices (a minimal DynamoDB sketch follows below).

Regards,
Haritha
Talent Acquisition Specialist
haritha.jaddu@databuzzltd.com
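As a rough illustration of the DynamoDB requirement, here is a minimal boto3 sketch of writing and reading an item. The table name and key schema are assumptions for illustration only.

```python
# Minimal sketch of basic DynamoDB access with boto3.
# The table name and key schema are illustrative assumptions.
import boto3

dynamodb = boto3.resource("dynamodb")
devices = dynamodb.Table("network_devices")   # hypothetical table, partition key "device_id"

# Write one item.
devices.put_item(
    Item={"device_id": "rtr-0001", "site": "BLR-DC1", "status": "ACTIVE"}
)

# Read it back by key.
response = devices.get_item(Key={"device_id": "rtr-0001"})
print(response.get("Item"))
```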
Posted 1 month ago
8.0 - 13.0 years
10 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role: Node-AWS Developer
Experience: 8-15 Years
Location: Pan India
Notice Period: 15 Days - Immediate Joiners Preferred

Job Description:
We are seeking a highly skilled and experienced Node-AWS Developer to join our team. The ideal candidate will have a strong background in web development, particularly with Node.js, and extensive experience with AWS cloud services.

Key Responsibilities:
- Designing, developing, and deploying enterprise-level, multi-tiered, and service-oriented applications using Node.js and TypeScript.
- Working extensively with AWS technologies including API Gateway, Lambda, RDS, S3, Step Functions, SNS, SQS, DynamoDB, CloudWatch, and CloudWatch Insights.
- Implementing serverless architectures and Infrastructure as Code (IaC) using AWS CDK or similar technologies.
- Applying strong knowledge of database design and data modeling principles for both relational and non-relational databases.
- Participating in code reviews, adhering to coding standards, and promoting a shift-left mindset to ensure high code quality.
- Developing and enhancing unit tests using relevant frameworks.
- Collaborating effectively in a distributed and agile environment, utilizing tools like Jira and Bitbucket.
- Articulating architecture and design decisions clearly and comprehensively.

Mandatory Skills:
- Node.js (minimum 4 years of dedicated experience)
- TypeScript, JavaScript
- AWS API Gateway
- AWS Lambda
- AWS SQS, AWS SNS
- AWS S3
- DynamoDB
- AWS CloudWatch, CloudWatch Insights
- Serverless architectures
- Microservices
- Docker
- Experience with unit test frameworks
- Database interactions (relational and non-relational)
- Experience with code reviews and coding standards
Posted 1 month ago
5.0 - 7.0 years
20 - 25 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Responsibilities:
- Design and Development: Develop robust, scalable, and maintainable backend services using Python frameworks like Django, Flask, and FastAPI (a minimal FastAPI sketch follows below).
- Cloud Infrastructure: Work with AWS services (e.g., CloudWatch, S3, RDS, Neptune, Lambda, ECS) to deploy, manage, and optimize our cloud infrastructure.
- Software Architecture: Participate in defining and implementing software architecture best practices, including design patterns, coding standards, and testing methodologies.
- Database Management: Proficiently work with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune) to design and optimize data models and queries. Experience with ORM tools.
- Automation: Design, develop, and maintain automation scripts (primarily in Python) for various tasks, including data updates and processing, scheduling cron jobs, integrating with communication platforms like Slack and Microsoft Teams for notifications and updates, and implementing business logic through automated scripts.
- Monitoring and Logging: Implement and manage monitoring and logging solutions using tools like the ELK stack (Elasticsearch, Logstash, Kibana) and AWS CloudWatch.
- Production Support: Participate in on-call rotations and provide support for production systems, troubleshooting issues and implementing fixes. Proactively identify and address potential production issues.
- Team Leadership and Mentorship: Lead and mentor junior backend developers, providing technical guidance, code reviews, and support for their professional growth.

Required Skills and Experience:
- 5+ years of experience in backend software development.
- Strong proficiency in Python and at least two of the following frameworks: Django, Flask, FastAPI.
- Hands-on experience with AWS cloud services, including ECS.
- Experience with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune).
- Strong experience with monitoring and logging tools, specifically the ELK stack and AWS CloudWatch.

Locations: Mumbai, Delhi NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Work Timings: 2:30 PM - 11:30 PM (Monday-Friday)
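As a rough illustration of the backend-service requirement, here is a minimal FastAPI sketch with two endpoints. The model, routes, and in-memory store are assumptions; a real service would use PostgreSQL or DynamoDB behind the handlers.

```python
# Minimal sketch of a FastAPI backend service. The model, routes, and
# in-memory store are illustrative assumptions (pydantic v2 API assumed).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

# Stand-in for PostgreSQL/DynamoDB access in a real service.
ORDERS: dict[int, dict] = {}


class Order(BaseModel):
    customer_id: int
    amount: float


@app.post("/orders/{order_id}")
def create_order(order_id: int, order: Order) -> dict:
    ORDERS[order_id] = order.model_dump()
    return {"order_id": order_id, **ORDERS[order_id]}


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return {"order_id": order_id, **ORDERS[order_id]}
```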
Posted 2 months ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Hybrid
Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integrate data from multiple sources or vendors to provide holistic insights from the data.
• Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform EDA (exploratory data analysis) to troubleshoot data-related issues and assist in their resolution.
• Experience in client interaction, both oral and written.
• Experience mentoring juniors and providing guidance to the team.
Required Technical Skills:
• Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
• Strong experience in Data Warehousing, ETL, Data Modelling, building ETL pipelines, and Data Architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services such as AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
• Knowledge of Dataiku and Big Data technologies is good to have; basic knowledge of BI tools like Power BI or Tableau is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the Pharma domain / life sciences commercial data operations.
Qualifications:
• Bachelor's or Master's in Engineering/MCA or an equivalent degree.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working on Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.
Location:
• Chennai, India
Posted 2 months ago
5.0 - 10.0 years
20 - 35 Lacs
Chennai
Work from Office
Development:
- Design, build, and maintain robust, scalable, and high-performance data pipelines to ingest, process, and store large volumes of structured and unstructured data.
- Utilize Apache Spark within Databricks to process big data efficiently, leveraging distributed computing to process large datasets in parallel.
- Integrate data from a variety of internal and external sources, including databases, APIs, cloud storage, and real-time streaming data.

Data Integration & Storage:
- Implement and maintain data lakes and warehouses, using technologies like Databricks, Azure Synapse, Redshift, and BigQuery to store and retrieve data.
- Design and implement data models, schemas, and architecture for efficient querying and storage.

Data Transformation & Optimization:
- Leverage Databricks and Apache Spark to perform data transformations at scale, ensuring data is cleaned, transformed, and optimized for analytics.
- Write and optimize Spark SQL, PySpark, and Scala code to process large datasets in real-time and batch jobs.
- Work on ETL processes to extract, transform, and load data from various sources into cloud-based data environments.

Big Data Tools & Technologies:
- Utilize cloud-based big data platforms (e.g., AWS, Azure, Google Cloud) in conjunction with Databricks for distributed data processing and storage.
- Implement and maintain data pipelines using Apache Kafka, Apache Flink, and other data streaming technologies for real-time data processing (a minimal streaming sketch follows below).

Collaboration & Stakeholder Engagement:
- Work with data scientists, data analysts, and business stakeholders to define data requirements and deliver solutions that align with business objectives.
- Collaborate with cloud engineers, data architects, and other teams to ensure smooth integration and data flow between systems.

Monitoring & Automation:
- Build and implement monitoring solutions for data pipelines, ensuring consistent performance, identifying issues, and optimizing workflows.
- Automate data ingestion, transformation, and validation processes to reduce manual intervention and increase efficiency.
- Document data pipeline processes, architectures, and data models to ensure clarity and maintainability.
- Adhere to best practices in data engineering, software development, version control, and code review.

Required Skills & Qualifications:
Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (or equivalent experience).

Technical Skills:
- Apache Spark: strong hands-on experience with Spark, specifically within Databricks (PySpark, Scala, Spark SQL).
- Experience working with cloud-based platforms such as AWS, Azure, or Google Cloud, particularly in the context of big data processing and storage.
- Proficiency in SQL and experience with cloud data warehouses (e.g., Redshift, BigQuery, Snowflake).
- Strong programming skills in Python, Scala, or Java.

Big Data & Cloud Technologies:
- Experience with distributed computing concepts and scalable data processing architectures.
- Familiarity with data lake architectures and frameworks (e.g., AWS S3, Azure Data Lake).

Data Engineering Concepts:
- Strong understanding of ETL processes, data modeling, and database design.
- Experience with batch and real-time data processing techniques.
- Familiarity with data quality, data governance, and privacy regulations.

Problem Solving & Analytical Skills:
- Strong troubleshooting skills for resolving issues in data pipelines and performance optimization.
- Ability to work with large, complex datasets and to perform data wrangling and cleaning.
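As a rough illustration of the real-time processing requirement, here is a minimal Spark Structured Streaming sketch that reads events from Kafka and appends them to a Delta table, as one might run on Databricks (where the Kafka and Delta integrations are built in). Broker addresses, topic, schema, and paths are assumptions.

```python
# Minimal sketch of a Structured Streaming job: Kafka -> parsed events -> Delta table.
# Broker addresses, topic, schema, and output paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("clickstream_ingest").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
    .option("subscribe", "clickstream")                   # hypothetical topic
    .load()
)

events = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://my-datalake/checkpoints/clickstream/")
    .outputMode("append")
    .start("s3://my-datalake/bronze/clickstream/")
)
query.awaitTermination()
```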
Posted 2 months ago
8.0 - 12.0 years
18 - 22 Lacs
Chennai
Work from Office
Job Summary:
We are seeking an experienced Senior Data Testing Lead to manage and guide our Data Testing team. The ideal candidate will bring strong expertise in validating complex data pipelines across heterogeneous systems such as Oracle, PostgreSQL, S3, Apache Iceberg, Redshift, and Qlik. This role demands ownership of the testing lifecycle, from test planning to execution and team leadership.

Key Responsibilities:
- Lead the end-to-end data testing efforts across multiple data sources and destinations.
- Manage and mentor a team of junior data testers, ensuring high-quality deliverables and continuous skill development.
- Define and implement test strategies, plans, and test cases for: data movement from Oracle/PostgreSQL to AWS S3; data transformation and ingestion from S3 to Iceberg to Redshift; and the data reporting flow from Redshift to Qlik.
- Perform data validations and quality checks, ensuring data accuracy, completeness, and integrity (a minimal reconciliation sketch follows below).
- Collaborate with data engineers, architects, and product owners to understand requirements and provide testing inputs.
- Maintain detailed test documentation and participate in code/data reviews.
- Set up and maintain test automation frameworks and reusable test scripts for data validation (optional but preferred).
- Identify and mitigate data quality risks proactively.

Required Skills and Experience:
- 8-12 years of overall experience, with at least 3+ years leading data testing teams.
- Strong SQL skills for data profiling, validation, and test scripting.
- Solid understanding of ETL/ELT testing methodologies.
- Experience with AWS S3, Apache Iceberg, Redshift, and Qlik is highly preferred.
- Proven experience in testing data flows from RDBMS (Oracle, PostgreSQL) to cloud data lakes/warehouses.
- Ability to work independently and manage multiple priorities in a fast-paced environment.
- Excellent leadership, communication, and stakeholder management skills.

Good to Have:
- Familiarity with data test automation tools like dbt, Great Expectations, or custom Python frameworks.
- Domain knowledge (e.g., Insurance, Finance, or Healthcare) is a plus.
- Experience working in Agile/Scrum teams.

We are looking for candidates who can join within a month.
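As a rough illustration of the kind of validation check this team automates, here is a minimal source-to-target row-count reconciliation sketch in Python. Connection strings, table names, and the use of psycopg2 for both the PostgreSQL source and the Redshift target are assumptions for illustration only.

```python
# Minimal sketch of a source-to-target row-count reconciliation check.
# Connection details and table names are illustrative assumptions.
import psycopg2


def row_count(dsn: str, query: str) -> int:
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(query)
            return cur.fetchone()[0]


source_count = row_count(
    "host=pg-source dbname=sales user=qa password=***",                 # hypothetical source
    "SELECT COUNT(*) FROM public.orders WHERE order_date = CURRENT_DATE - 1",
)
target_count = row_count(
    "host=redshift-cluster dbname=dw user=qa password=*** port=5439",   # hypothetical target
    "SELECT COUNT(*) FROM analytics.orders WHERE order_date = CURRENT_DATE - 1",
)

assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}"
)
print(f"Reconciliation passed: {source_count} rows on both sides")
```

A fuller suite would add checksum or column-level comparisons and run per load window, but the count check above is usually the first gate.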
Posted 2 months ago
8.0 - 12.0 years
30 - 45 Lacs
Bengaluru
Work from Office
We encourage you to apply if you have a passion for innovation, a strong technical background, and the ability to lead a team toward impactful, data-driven solutions.

About iSOCRATES: Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI: MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth, so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description:
iSOCRATES is seeking a highly skilled and experienced Lead Data Scientist to spearhead our growing Data Science team. The Lead Data Scientist will be responsible for leading the team that defines, designs, reports on, and analyzes audience, campaign, and programmatic media trading data. This includes working with selected partner-focused Managed Services and Outsourced Services on behalf of our supply-side and demand-side partners. The role will involve collaboration with cross-functional teams and working across a variety of media channels, including digital and offline channels such as display, mobile, video, social, native, and advanced TV/Audio ad products.

Key Responsibilities:
1. Team Leadership & Management: Lead and mentor a team of data scientists to drive the design, development, and implementation of data-driven solutions for media and marketing campaigns.
2. Advanced Analytics & Data Science Expertise: Provide hands-on leadership in applying rigorous statistical, econometric, and Big Data methods to define requirements, design analytics solutions, analyze results, and optimize economic outcomes. Expertise in modeling techniques including propensity modeling, Media Mix Modeling (MMM), Multi-Touch Attribution (MTA), Recency-Frequency-Monetary (RFM) analysis, Bayesian statistics, and non-parametric methods.
3. Generative AI & NLP: Lead the implementation and development of Generative AI, Large Language Models, and Natural Language Processing (NLP) techniques to enhance data modeling, prediction, and analysis processes.
4. Data Architecture & Management: Architect and manage dynamic data systems from diverse sources, ensuring effective integration and optimization of audience, pricing, and contextual data for programmatic and digital advertising campaigns. Oversee the management of DSPs, SSPs, DMPs, and other data systems integral to the ad-tech ecosystem.
5. Cross-Functional Collaboration: Work closely with Product, System Development, Yield, Operations, Finance, Sales, Business Development, and other teams to ensure seamless data quality, completeness, and predictive outcomes across campaigns. Design and deliver actionable insights, creating innovative, data-driven solutions and reporting tools for use by both iSOCRATES teams and business partners.
6. Predictive Modeling & Optimization: Lead the development of predictive models and analyses to drive programmatic optimization, focusing on revenue, audience behavior, bid actions, and ad inventory optimization (eCPM, fill rate, etc.). Monitor and analyze campaign performance, making data-driven recommendations for optimizations across various media channels including websites, mobile apps, and social media platforms.
7. Data Collection & Quality Assurance: Oversee the design, collection, and management of data, ensuring high-quality standards, efficient storage systems, and optimizations for in-depth analysis and visualization. Guide the implementation of tools for complex data analysis, model development, reporting, and visualization, ensuring alignment with business objectives.

Qualifications:
- Master's or Ph.D. in Statistics, Engineering, Science, or Business with a strong foundation in mathematics and statistics.
- 8 to 10 years of experience, with at least 5 years of hands-on experience in data science, predictive analytics, media research, and digital analytics, focused on modeling, analysis, and optimization within the media, advertising, or tech industry.
- At least 3 years of hands-on experience with Generative AI, Large Language Models, and Natural Language Processing techniques.
- Minimum 3 years of experience in Publisher and Advertiser Audience Data Analytics and Modeling.
- Proficient in data collection, business intelligence, machine learning, and deep learning techniques using tools such as Python, R, scikit-learn, Hadoop, Spark, MySQL, and AWS S3.
- Expertise in logistic regression, customer segmentation, persona building, and predictive analytics (a minimal propensity-model sketch follows below).
- Strong analytical and data modeling skills with a deep understanding of audience behavior, pricing strategies, and programmatic media optimization.
- Experience working with DSPs, SSPs, DMPs, and programmatic systems.
- Excellent communication and presentation skills, with the ability to communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple tasks and projects effectively, both independently and in collaboration with remote teams.
- Strong problem-solving skills with the ability to adapt to evolving business needs and deliver solutions proactively.
- Experience in developing analytics dashboards, visualization tools, and reporting systems.
- Background in digital media optimization, audience segmentation, and performance analytics.
- An interest in, and the ability to, work in a fast-paced operation on the analytics and revenue side of our business.

This is an exciting opportunity to take on a leadership role at the forefront of data science in the digital media and advertising space. If you have a passion for innovation, a strong technical background, and the ability to lead a team toward impactful, data-driven solutions, we encourage you to apply.
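As a rough illustration of the propensity-modeling expertise listed above, here is a minimal scikit-learn logistic regression sketch that scores the probability of conversion. The features and synthetic data are assumptions; a real model would be trained on audience and campaign data.

```python
# Minimal sketch of a propensity model (logistic regression): predict P(convert).
# Feature names and the synthetic training data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for audience features: ad impressions, site visits, recency (days).
X = rng.normal(size=(5_000, 3))
# Synthetic conversion labels loosely driven by the first two features.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5_000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

propensity = model.predict_proba(X_test)[:, 1]   # P(convert) per audience member
print("Test AUC:", round(roc_auc_score(y_test, propensity), 3))
```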
Posted 2 months ago
5.0 - 7.0 years
8 - 10 Lacs
Hyderabad
Work from Office
Role & Responsibilities:
- Design and develop integrations and microservices with hands-on coding.
- Build real-time and asynchronous systems integrations.
- Create API endpoints for internal and partner cloud systems.
- Document designs and runbooks.
- Take full ownership of the integration lifecycle for multiple integrations.

Requirements:
- Bachelor of Science in Computer Science or Engineering.
- Strong background in software engineering and integration.
- 5+ years of overall industry experience.
- 3+ years of hands-on experience in MuleSoft architecture and full lifecycle implementation, from requirements gathering/analysis to go-live and post-production support.
- Mandatory 2+ years of experience in RTF (Runtime Fabric).
- Expertise in using REST and SOAP APIs.
- Proficiency in building MuleSoft integrations and APIs using Mule v4.
- Experience in integrating a portfolio of SaaS applications.
- Strong coding skills in Java.
- Familiarity with messaging infrastructure, preferably AWS SQS, and storage solutions like AWS S3.
- Experience with relational databases and solid SQL knowledge.
- Proficiency in using Anypoint Platform API Manager, Runtime Manager, Exchange, etc.
- Experience with Runtime Fabric or Kubernetes.
- Familiarity with GitHub version control, Jenkins, and Maven.
- MuleSoft developer certification.
- Knowledge of securing data; understanding of PGP, SSH, OAuth, HTTPS, SFTP.
Posted 2 months ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai
Work from Office
So, what's the job?
- You'll lead the design, development, and optimization of scalable, maintainable, and high-performance ETL/ELT pipelines using Informatica IDMC CDI.
- You'll manage and optimize cloud-based storage environments, including AWS S3 buckets.
- You'll implement robust data integration solutions that ingest, cleanse, transform, and deliver structured and semi-structured data from diverse sources to downstream systems and data warehouses.
- You'll support data integration from source systems, ensuring data quality and completeness.
- You'll automate data loading and transformation processes using tools such as Python, SQL, and orchestration frameworks.
- You'll contribute to the strategic transition toward cloud-native data platforms (e.g., AWS S3, Snowflake) by designing hybrid or fully cloud-based data solutions.
- You'll collaborate with Data Architects to align data models and structures with enterprise standards.
- You'll maintain clear documentation of data pipelines, processes, and technical standards, and mentor team members in best practices and tool usage.
- You'll implement and enforce data security, access controls, and compliance measures in line with organizational policies.

And what are we looking for?
- You'll have a Bachelor's degree in Computer Science, Engineering, or a related field, with a minimum of 5 years of industry experience.
- You'll be an expert in designing, developing, and optimizing ETL/ELT pipelines using Informatica IDMC Cloud Data Integration (CDI).
- You'll bring strong experience with data ingestion, transformation, and delivery across diverse data sources and targets.
- You'll have a deep understanding of data integration patterns, orchestration strategies, and data pipeline lifecycle management.
- You'll be proficient in implementing incremental loads, CDC (Change Data Capture), and data synchronization.
- You'll bring strong experience with SQL Server, including performance tuning, stored procedures, and indexing strategies.
- You'll possess a solid understanding of data modeling, data warehousing concepts (star/snowflake schema), and dimensional modeling.
- You'll have experience integrating with cloud data warehouses such as Snowflake.
- You'll be familiar with cloud storage and compute platforms such as AWS S3, EC2, Lambda, Glue, and RDS.
- You'll design and implement cloud-native data architectures using modern tools and best practices.
- You'll have exposure to data migration and hybrid architecture design (on-prem to cloud).
- You'll be experienced with Informatica Intelligent Cloud Services (IICS), especially IDMC CDI.
- You'll have strong proficiency in SQL, T-SQL, and scripting languages like Python or shell.
- You'll have experience with workflow orchestration tools like Apache Airflow, Informatica task flows, or Control-M.
- You'll be knowledgeable in API integration, REST/SOAP, and file-based data exchange (e.g., SFTP, CSV, Parquet).
- You'll implement data validation, error handling, and data quality frameworks.
- You'll have an understanding of data lineage, metadata management, and governance best practices.
- You'll set up monitoring, logging, and alerting for ETL processes.
Posted 2 months ago
10.0 - 12.0 years
13 - 20 Lacs
Chandigarh
Work from Office
Responsibility and Role:

A) Mandatory Role and Responsibility (apply only if you match the criteria below):
- You should be a strong, capable, end-to-end hands-on doer: a full stack software developer (backend and frontend), technology architect, and software engineering manager in a cloud environment of AWS and GCP.
- In this multi-hat role you should be a technology builder and doer, able to carry out product software engineering for a highly scalable SaaS cloud platform for B2B, B2B2C, and direct B2C consumers - a digital health platform for web and mobile - from technology architecture, product design, and product development through deployment with all functionality required by the business owner and product owner (our product's end-to-end mobile app prototype is already developed and its full step-by-step functionality is ready for demo and working), to final product delivery for commercialisation.
- You should be highly accountable, responsive, and responsible in understanding the requirements, delivery, and commitments brought to the table within tight deadlines.
- You should be hands-on with the latest proven technology stacks for e-commerce digital platforms, such as FinTech, AgriTech, EduTech, or Digital Health and AI Health platforms, taking mobile app prototypes through to delivery on web and mobile.
- Primary responsibilities include acting as Tech Architect for a highly scalable SaaS cloud platform for B2B and B2B2C consumers and an e-commerce portal; developing AWS cloud servers and databases for AI Health based products and solutions delivered through web and mobile app functionality; and coding for mobile platforms.
- Developing front-end web e-commerce portal architecture.
- Designing user interactions on web pages.
- Developing back-end website applications.
- Creating servers and databases for functionality.
- Ensuring cross-platform optimization for mobile phones.
- Ensuring responsiveness of applications.
- Working alongside graphic designers for web design features.
- Seeing a project through from conception to finished product.
- Designing and developing APIs and integrating with third-party APIs for smooth transactions and operation.
- Meeting both technical and consumer needs.
- Staying abreast of developments in web applications and programming languages.

B) Most Important - Your Personality Profile:
- You should have a demonstrated track record as a top-performing software developer, delivering your commitments single-handedly in a hands-on, multi-hat, doer role.
- A good team player and team leader with a good level of understanding with your higher-ups and juniors within a small founding team; good behavioural practice, caring, respect, and relationship skills; smoothly connected with the team and reporting officer, with a high level of engagement, integrity, deep understanding, and professional communication skills.

4) Experience:
- IT services and MNC IT industry experience candidates need not apply.
- Only candidates with early-stage to growth-stage tech startup experience in a single-handed, multi-hat role should apply.
- We will only count your tech startup IT product development (full stack) experience for your selection and CTC package.

A) Total experience of 10 to 12 years in software product engineering and product development, in a hands-on multi-hat role as a doer, on web and mobile applications, only in tech startups, preferably in Digital Health, AI Health, or another industry building a highly scalable e-commerce platform using the latest cloud-based software product technology. Preferably 1 year as a Startup CTO / VP / AVP / Chief Architect / Engineering Manager - Software Engineering & IT Product Development, in a single-handed, multi-hat, hands-on engineering management role.

Mandatory:
- Startup experience as a doer, with at least 2 years as a Software Technology Architect and 1 year as a Solutions Architect.
- 2 to 7 years as an end-to-end, single-handed, multi-hat full stack developer (backend) is mandatory.
- Worked from scratch on new cloud platform product development.
- 1 year leading and managing DevOps development (optional).
- At least 1 year as a Software Engineering Head / Engineering Manager / Full Stack Developer Lead in web and mobile application product engineering in an early-stage to growth-stage digital health startup, from scratch to complete delivery of world-class products and solutions.
- Worked single-handedly and independently, and as a team lead in a multi-hat role, including as a backend developer; experienced in QA testing on web and mobile apps in a startup company.
- Strong organizational and software product engineering project management skills (not IT services).

5) Required Job Skillset:
- Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript.
- Familiarity with JavaScript frameworks such as AngularJS, React, and Ember.
- Django, DynamoDB, LAMP, MEAN stack.
- AWS (AWS S3, EC2, Amazon Device Farm), Android, Kotlin, React, React Native, Flutter, Dart, iOS, Swift.
- Additional skills in NoSQL DB and MongoDB are a plus but not mandatory.
- Proficiency with server-side languages such as Python, Ruby, Java, PHP, and .NET; TensorFlow.
- Familiarity with database technology such as MySQL, Oracle, and MongoDB.
- Testing tools: Appium, Selenium, Robotium, Mockito, Espresso.
- Excellent verbal communication skills.
- Completed product engineering for a highly scalable digital health / e-commerce platform maintaining a high level of data security, data privacy, and cybersecurity, with at least 10-30 (multifunction) mobile apps.
- Hands-on development from scratch of a highly scalable digital health / digital technology e-commerce cloud platform, with at least 100 to 50 million downloads of the product developed.

6) Qualification:
- B.Tech in CSE / IT / ECE from a reputed engineering institute.
- MBA candidates need not apply.
Posted 2 months ago
5 - 7 years
15 - 18 Lacs
Mumbai, Pune, Bengaluru
Work from Office
- Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQS (a minimal Lambda/SQS sketch follows below).
- Strong skills in data engineering on AWS, with proficiency in Python, PySpark, and SQL.
- Experience with batch job scheduling and managing data dependencies.
- Knowledge of data processing tools like Spark and Airflow.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.
- Provide Run/DevOps support and manage the ongoing operation of data services.
(Immediate joiners preferred.)
Location: Bengaluru, Mumbai, Pune, Chennai, Kolkata, Hyderabad
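As a rough illustration of the Lambda and SQS requirement, here is a minimal Lambda handler that consumes SQS messages and lands them in S3. The bucket name, message shape, and key layout are assumptions for illustration only.

```python
# Minimal sketch of an AWS Lambda handler consuming SQS messages and writing to S3.
# The bucket name, message shape, and key layout are illustrative assumptions.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-landing-bucket"   # hypothetical bucket


def handler(event, context):
    """Triggered by an SQS event source mapping; each record body is a JSON payload."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        key = f"events/{record['messageId']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload).encode("utf-8"))
    return {"processed": len(event["Records"])}
```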
Posted 2 months ago