2 - 3 years
5 - 15 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines
- Work with structured and unstructured data from various sources (APIs, databases, cloud storage, etc.)
- Optimize data workflows and ensure data quality, consistency, and reliability
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Maintain and improve our data infrastructure and architecture
- Monitor pipeline performance and troubleshoot issues in real time

Preferred candidate profile:
- 2-3 years of experience in data engineering or a similar role
- Proficiency in SQL and Python (or Scala/Java for data processing)
- Experience with ETL tools (e.g., Airflow, dbt, Luigi)
- Familiarity with cloud platforms like AWS, GCP, or Azure
- Hands-on experience with data warehouses (e.g., Redshift, BigQuery, Snowflake)
- Knowledge of distributed data processing frameworks like Spark or Hadoop
- Experience with version control systems (e.g., Git)
- Exposure to data modeling and schema design
- Experience working with CI/CD pipelines for data workflows
- Understanding of data privacy and security practices
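For a concrete sense of the ETL orchestration work this posting describes, here is a minimal Airflow sketch using the TaskFlow API; the DAG name, schedule, and stubbed extract/transform/load logic are all illustrative assumptions, not part of the posting.

```python
# A minimal sketch, assuming Airflow 2.4+; names and stub logic are illustrative.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        # In practice: pull from an API, database, or cloud storage.
        return [{"id": 1, "value": 42}, {"id": 2, "value": None}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Basic quality rule: drop rows with missing values.
        return [r for r in records if r["value"] is not None]

    @task
    def load(records: list[dict]) -> None:
        # In practice: write to a warehouse such as Redshift or BigQuery.
        print(f"loading {len(records)} records")

    load(transform(extract()))


example_etl()
```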
Posted 2 months ago
4 - 8 years
10 - 15 Lacs
Pune
Remote
Position: AWS Data Engineer

About bluCognition: bluCognition is an AI/ML-based start-up specializing in developing data products that leverage alternative data sources and providing servicing support to clients in the financial services sector. Founded in 2017 by senior professionals from the financial services industry, the company is headquartered in the US, with its delivery centre based in Pune. We build all our solutions using the latest technology stack in AI, ML, and NLP, combined with decades of risk-management experience at some of the largest financial services firms in the world. Our clients are some of the biggest and most progressive names in the financial services industry. We are entering a significant growth phase and are looking for individuals with an entrepreneurial mindset who want to join us on this exciting journey. https://www.blucognition.com

The Role: We are seeking an experienced AWS Data Engineer to design, build, and manage scalable data pipelines and cloud-based solutions. In this role, you will work closely with data scientists, analysts, and software engineers to develop systems that support data-driven decision-making.

Key Responsibilities:
1) Design, implement, and maintain robust, scalable, and efficient data pipelines using AWS services.
2) Develop ETL/ELT processes and automate data workflows for real-time and batch data ingestion.
3) Optimize data storage solutions (e.g., S3, Redshift, RDS, DynamoDB) for performance and cost-efficiency.
4) Build and maintain data lakes and data warehouses following best practices for security, governance, and compliance.
5) Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
6) Monitor, troubleshoot, and improve the reliability and quality of data systems.
7) Implement data quality checks, logging, and error handling in data pipelines.
8) Use Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform for environment management.
9) Stay up to date with the latest developments in AWS services and big data technologies.

Required Qualifications:
1) Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
2) 4+ years of experience working as a data engineer or in a similar role.
3) Strong experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, and AWS Step Functions.
4) Proficiency in SQL and Python.
5) Solid understanding of data modeling, ETL processes, and data warehouse architecture.
6) Experience with orchestration tools like Apache Airflow or AWS Managed Workflows.
7) Knowledge of security best practices for cloud environments (IAM, KMS, VPC, etc.).
8) Experience with monitoring and logging tools (CloudWatch, X-Ray, etc.).

Preferred Qualifications:
1) AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect certification.
2) Experience with real-time data streaming technologies like Kinesis or Kafka.
3) Familiarity with DevOps practices and CI/CD pipelines.
4) Knowledge of machine learning data preparation and MLOps workflows.

Soft Skills:
1) Excellent problem-solving and analytical skills.
2) Strong communication skills with both technical and non-technical stakeholders.
3) Ability to work independently and collaboratively in a team environment.
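As an illustration of the S3-to-Redshift ingestion pattern such a role covers, here is a minimal sketch of an S3-triggered Lambda issuing a COPY through the Redshift Data API via Boto3; the bucket, cluster, table, and IAM role names are illustrative assumptions.

```python
# A minimal sketch of an S3-triggered ingestion Lambda using the Redshift Data API.
# Bucket, cluster, table, and role ARN are illustrative assumptions.
import boto3

redshift = boto3.client("redshift-data")


def handler(event, context):
    # One S3 ObjectCreated event may carry several records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        copy_sql = (
            f"COPY analytics.events FROM 's3://{bucket}/{key}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "FORMAT AS JSON 'auto';"
        )
        # Fire-and-forget; production code would poll describe_statement for status.
        redshift.execute_statement(
            ClusterIdentifier="example-cluster",
            Database="dev",
            DbUser="etl_user",
            Sql=copy_sql,
        )
```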
Posted 2 months ago
4 - 9 years
7 - 12 Lacs
Bengaluru
Work from Office
- Refine ambiguous questions and generate new hypotheses about the product through a deep understanding of the data, our customers, and our business
- Design experiments and interpret the results to draw detailed and impactful conclusions
- Define how our teams measure success by developing Key Performance Indicators and other user/business metrics, in close partnership with Product and other subject areas such as engineering, operations, and marketing
- Collaborate with applied scientists and engineers to build and improve the availability, integrity, accuracy, and reliability of data logging and data pipelines
- Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives

---- Basic Qualifications ----
- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or another quantitative field
- 4+ years of experience as a Product Analyst, Sr. Data Analyst, or in another data-analysis-focused function
- Excellent understanding of statistical principles backed by an academic foundation
- Advanced SQL expertise
- Experience with either Python or R for data analysis
- Significant experience in setting up and evaluating complex experiments
- Proven ability to wrangle large datasets, extract insights from data, and summarise learnings/takeaways
- Experience with Excel and some dashboarding/data visualisation (e.g., Tableau, Mixpanel, Looker, or similar)

---- Preferred Qualifications ----
- Proven aptitude for data storytelling and root-cause analysis using data
- Ability to learn and adapt to new methodologies for data collection and analysis
- Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail
- Ability to work in a self-guided manner
- Ability to mentor, coach, and develop junior team members
- Superb communication and organisation skills
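For context on the experiment-evaluation work listed above, a minimal analysis sketch in Python; the CSV file and its variant/converted columns are illustrative assumptions.

```python
# A minimal sketch of evaluating an A/B experiment; the data schema is assumed.
import pandas as pd
from scipy import stats

# Assumed columns: variant ("control"/"treatment"), converted (0 or 1).
df = pd.read_csv("experiment_results.csv")

control = df.loc[df["variant"] == "control", "converted"]
treatment = df.loc[df["variant"] == "treatment", "converted"]

# Welch's two-sample t-test on conversion rates; a proportions z-test
# would be an equally reasonable choice here.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() - control.mean()
print(f"lift={lift:.4f}, p={p_value:.4f}")
```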
Posted 2 months ago
5 - 9 years
8 - 12 Lacs
Bengaluru
Work from Office
---- What the Candidate Will Do ----
- Refine ambiguous questions and generate new hypotheses about the product through a deep understanding of the data, our customers, and our business
- Design experiments and interpret the results to draw detailed and impactful conclusions
- Define how our teams measure success by developing Key Performance Indicators and other user/business metrics, in close partnership with Product and other subject areas such as engineering, operations, and marketing
- Collaborate with applied scientists and engineers to build and improve the availability, integrity, accuracy, and reliability of data logging and data pipelines
- Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives

---- Basic Qualifications ----
- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or another quantitative field
- 4+ years of experience as a Product Analyst, Sr. Data Analyst, or in another data-analysis-focused function
- Excellent understanding of statistical principles backed by an academic foundation
- Advanced SQL expertise
- Experience with either Python or R for data analysis
- Significant experience in setting up and evaluating complex experiments
- Proven ability to wrangle large datasets, extract insights from data, and summarise learnings/takeaways
- Experience with Excel and some dashboarding/data visualisation (e.g., Tableau, Mixpanel, Looker, or similar)

---- Preferred Qualifications ----
- Proven aptitude for data storytelling and root-cause analysis using data
- Ability to learn and adapt to new methodologies for data collection and analysis
- Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail
- Ability to work in a self-guided manner
- Ability to mentor, coach, and develop junior team members
- Superb communication and organisation skills
Posted 2 months ago
1 - 4 years
7 - 11 Lacs
Bengaluru
Work from Office
---- What the Candidate Will Do ----
- Design, develop, and maintain robust and scalable software solutions
- Find opportunities for quality improvements in the search stack and lead the entire development lifecycle end-to-end, from architecture design and coding to testing and deployment

---- Basic Qualifications ----
- Bachelor's degree in Computer Science
- 3+ years of professional experience in software development with a track record of increasing responsibility and impact
- Experience with the Go and Python programming languages
- Demonstrated experience developing sophisticated backend systems and longer-term ownership of critical backend services and infrastructure
- Bias to action and a proven track record of getting things done

---- Preferred Qualifications ----
- Master's degree in Computer Science
- Big Data: proficiency building data pipelines; experience using PySpark at scale with large datasets
- Experience with t
Posted 2 months ago
5 - 8 years
22 - 30 Lacs
Bengaluru
Hybrid
Department Overview: augmented Intelligence (augIntel) builds automation, analytics, and artificial intelligence solutions that drive success for Cardinal Health by creating material savings, efficiencies, and revenue growth opportunities. The team drives business innovation by leveraging emerging technologies and turning them into differentiating business capabilities.

Job Overview: Designing, building, and operationalizing large-scale enterprise data solutions and applications using one or more Google Cloud Platform data and analytics services, in combination with technologies like Spark, Cloud Dataproc, Cloud Dataflow, Apache Beam, BigQuery, Cloud Pub/Sub, Cloud Functions, and Airflow.

Responsibilities:
- Designing and implementing data transformation, ingestion, and curation functions on GCP using GCP-native or custom programming
- Designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture, using Python
- Optimizing data pipelines for performance and cost for large-scale data lakes

Desired Qualifications:
- Bachelor's degree preferred, or equivalent work experience
- 5+ years of engineering experience in Data Analytics and Data Integration related fields
- 3+ years of experience writing complex SQL queries, stored procedures, etc.
- 1+ years of hands-on GCP experience in Data Engineering and Cloud Analytics solutions
- Experience designing and optimizing data models on GCP using data stores such as BigQuery
- Agile development skills and experience
- Experience with CI/CD pipelines such as Concourse or Jenkins
- Google Cloud Platform certification is a plus

Perks and benefits: Variable pay, WiFi reimbursement, cab facilities, shift allowance (1 PM-10 PM)
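As a small illustration of the BigQuery-centric work described above, a minimal query sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are illustrative assumptions.

```python
# A minimal sketch, assuming the google-cloud-bigquery package and ambient
# GCP credentials; project/dataset/table names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT DATE(order_ts) AS day, SUM(amount) AS revenue
    FROM `example_project.sales.orders`
    GROUP BY day
    ORDER BY day
"""
# result() blocks until the job finishes, then streams rows.
for row in client.query(query).result():
    print(row.day, row.revenue)
```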
Posted 2 months ago
5 - 9 years
12 - 36 Lacs
Hyderabad
Work from Office
Sr. AI/ML Python Developer (5-8 yrs)
- Collaborate with cross-functional teams on data analysis & statistics
- Develop ML models using Python, NumPy, Pandas, deep learning & NLP
- Implement data pipelines and deploy models on TensorFlow Serving & GCP

Send your resume to rajkalyan@garudaven.com

Benefits: Food allowance, annual bonus, provident fund, health insurance, office cab/shuttle
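For a concrete flavour of the Pandas/NumPy preparation work this ad implies, a minimal sketch; the file name, columns, and split ratio are illustrative assumptions.

```python
# A minimal feature-preparation sketch; data schema is an assumption.
import numpy as np
import pandas as pd

df = pd.read_csv("training_data.csv")      # assumed columns: text, amount, label
df["amount_log"] = np.log1p(df["amount"])  # tame a skewed numeric feature
df["text_len"] = df["text"].str.len()      # a trivial NLP-ish feature

# Simple reproducible train/validation split without sklearn.
mask = np.random.default_rng(0).random(len(df)) < 0.8
train, valid = df[mask], df[~mask]
print(train.shape, valid.shape)
```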
Posted 2 months ago
5 - 8 years
13 - 18 Lacs
Mohali, Gurugram, Bengaluru
Work from Office
Job Title: Sr Data Engineer - Snowflake & Python

About the Role: We are seeking a skilled and proactive Sr Data Developer with 5-8 years of hands-on experience in Snowflake, Python, Streamlit, and SQL, along with expertise in consuming REST APIs and working with modern ETL tools like Matillion, Fivetran, etc. The ideal candidate will have a strong foundation in data modeling, data warehousing, and data profiling, and will play a key role in designing and implementing robust data solutions that drive business insights and innovation.

Key Responsibilities:
- Design, develop, and maintain data pipelines and workflows using Snowflake and an ETL tool (e.g., Matillion, dbt, Fivetran, or similar)
- Develop data applications and dashboards using Python and Streamlit
- Create and optimize complex SQL queries for data extraction, transformation, and loading
- Integrate REST APIs for data access and process automation
- Perform data profiling, quality checks, and troubleshooting to ensure data accuracy and integrity
- Design and implement scalable and efficient data models aligned with business requirements
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and deliver actionable solutions
- Implement best practices in data governance, security, and compliance

Required Skills and Qualifications:
- Experience in HR data and databases is a must
- 5-8 years of professional experience in a data engineering or development role
- Strong expertise in Snowflake, including performance tuning and warehouse optimization
- Proficiency in Python, including data manipulation with libraries like Pandas
- Experience building web-based data tools using Streamlit
- Solid understanding of and experience with RESTful APIs and JSON data structures
- Strong SQL skills and experience with advanced data transformation logic
- Experience with an ETL tool commonly used with Snowflake (e.g., dbt, Matillion, Fivetran, Airflow)
- Hands-on experience in data modeling (dimensional and normalized), data warehousing concepts, and data profiling techniques
- Familiarity with version control (e.g., Git) and CI/CD processes is a plus

Preferred Qualifications:
- Experience working in cloud environments (AWS, Azure, or GCP)
- Knowledge of data governance and cataloging tools
- Experience with agile methodologies and working in cross-functional teams
- Experience with Azure Data Factory
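A minimal sketch of the Streamlit-on-Snowflake pattern this role describes, assuming snowflake-connector-python installed with its pandas extra; the account, credentials, and employees table are illustrative assumptions.

```python
# A minimal Streamlit dashboard over Snowflake; connection details are assumed.
import snowflake.connector
import streamlit as st

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password=st.secrets["snowflake_password"],  # assumed secret name
    warehouse="ANALYTICS_WH",
    database="HR",
    schema="PUBLIC",
)

# fetch_pandas_all requires the connector's pandas extra.
df = conn.cursor().execute(
    "SELECT department, COUNT(*) AS headcount FROM employees GROUP BY department"
).fetch_pandas_all()

# Snowflake returns upper-cased column names by default.
st.bar_chart(df.set_index("DEPARTMENT")["HEADCOUNT"])
```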
Posted 2 months ago
3 - 6 years
10 - 20 Lacs
Gurugram
Work from Office
About ProcDNA: ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge tech to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged, it's ingrained in our DNA.

What we are looking for: As the Associate Engagement Lead, you'll leverage data to unravel complexities and devise strategic solutions that deliver tangible results for our clients. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What you'll do:
- Design/implement complex and scalable enterprise data processing and BI reporting solutions
- Design, build, and optimize ETL pipelines or underlying code to enhance data warehouse systems
- Work towards optimizing the overall costs incurred by system infrastructure, operations, change management, etc.
- Deliver end-to-end data solutions across multiple infrastructures and applications
- Coach, mentor, and manage a team of junior associates, helping them plan tasks effectively and more
- Demonstrate overall client stakeholder and project management skills (drive client meetings, create realistic project timelines, plan and manage individual and team tasks)
- Assist senior leadership in business development proposals focused on technology by providing SME support
- Build strong partnerships with other teams to create valuable solutions
- Stay up to date with the latest industry trends

Must have:
- 3-5 years of experience in designing/building data warehouses and BI reporting, with a B.Tech/B.E background
- Prior experience managing client stakeholders and junior team members
- A background in managing Life Science clients is mandatory
- Proficiency in big data processing and cloud technologies like AWS, Azure, Databricks, PySpark, Hadoop, etc.; proficiency in Informatica is a plus
- Extensive hands-on experience with cloud data warehouses like Redshift, Azure, Snowflake, etc.; proficiency in SQL, data modelling, and designing ETL pipelines is a must
- Intermediate to expert-level proficiency in Python
- Proficiency in Tableau, PowerBI, or Qlik is a must
- Should have worked on large datasets and complex data modelling projects
- Prior experience in business development activities is mandatory
- Domain knowledge of the pharma/healthcare landscape is mandatory
Posted 2 months ago
10 - 15 years
0 - 0 Lacs
Chennai
Work from Office
About the Role
As a Senior Data Engineer you'll be a core part of our engineering team. You will bring your valuable experience and knowledge, improving the technical quality of our data-focused products. This is a key role in helping us become more mature, deliver innovative new products, and unlock further business growth. This role will be part of a newly formed team that will collaborate alongside data team members based in Ireland, the USA, and India.

Following the successful delivery of some fantastic products in 2024, we have embarked upon a data-driven strategy in 2025. We have a huge amount of data and are keen to accelerate unlocking its value to delight our customers and colleagues. You will be tasked with delivering new data pipelines, actionable insights in automated ways, and enabling innovative new product features.

Reporting to our Team Lead, you will be collaborating with the engineering and business teams. You'll work across all our brands, helping to shape their future direction. Working as part of a team, you will help shape the technical design of our platforms and solve complicated problems in elegant ways that are robust, scalable, and secure. We don't get everything right first time, but you will help us reflect, adjust, and be better next time around.

We are looking for people who are inquisitive, confident exploring unfamiliar problems, and have a passion for learning. We don't have all the answers and don't expect you to know everything either. Our team culture is open, inclusive, and collaborative - we tackle goals together. Seeking the best solution to a problem, we actively welcome ideas and opinions from everyone in the team.

Our Technologies
We are continuously evolving our products and exploring new opportunities. We are focused on selecting the right technologies to solve the problem at hand. We know the technologies we'll be using in 3 years' time will probably be quite different to what we're using today. You'll be a key contributor to evolving our tech stack over time. Our data pipelines are currently based upon Google BigQuery, Fivetran, and dbt Cloud. These involve advanced SQL alongside Python in a variety of areas. We don't need you to be an expert with these technologies, but it will help if you're strong with something similar.

Your Skills and Experience
This is an important role for us as we scale up the team and we are looking for someone who has existing experience at this level. You will have worked with data-driven platforms that involve some kind of transaction, such as eCommerce, trading platforms, or advertising lead generation. Your broad experience and knowledge of data engineering methods mean you're able to build high-quality products regardless of the language used - solutions that avoid common pitfalls impacting the platform's technical performance. You can apply automated approaches for tracking and measuring quality throughout the whole lifecycle, through to the production environments. You are comfortable working with complex and varied problems. As a strong communicator, you work well with product owners and business stakeholders. You're able to influence and persuade others by listening to their views, explaining your own thoughts, and working to achieve agreement. We have many automotive industry experts within our team already and they are eager to teach you everything you need to know for this role. Any existing industry knowledge is a bonus but is not necessary.
This is a full-time role based in our India office on a semi-flexible basis. Our engineering team is globally distributed but we’d like you to be accessible to the office for ad-hoc meetings and workshops.
Posted 2 months ago
3 - 5 years
6 - 8 Lacs
Pune
Work from Office
Job Title: Senior Data Engineer
Experience Required: 3 to 5 Years
Location: Baner, Pune
Job Type: Full-Time (WFO)

Job Summary: We are seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing scalable data pipelines, working with cloud platforms like Microsoft Azure and AWS, and utilizing advanced tools such as data lakes, PySpark, and Azure Data Factory. The role involves collaborating with cross-functional teams to design and implement robust data solutions that support business intelligence, analytics, and decision-making processes.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines to ingest, transform, and process large datasets from various sources
- Build and optimize data pipelines and architectures for efficient and secure data processing
- Work extensively with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics for cloud data integration and management
- Utilize Databricks and PySpark for advanced big data processing and analytics
- Implement data modelling and design data warehouses to support business intelligence tools like Power BI
- Ensure data quality, governance, and security using Azure DevOps and Azure Functions
- Develop and maintain SQL Server databases and write optimized SQL queries for analytics and reporting
- Collaborate with stakeholders to gather requirements and translate them into effective data engineering solutions
- Implement data architecture best practices to support big data initiatives and analytics use cases
- Monitor, troubleshoot, and improve data workflows and processes to ensure seamless data flow

Required Skills and Qualifications:
- Educational Background: Bachelor's or master's degree in Computer Science, Information Systems, or a related field
- Technical Skills:
  - Strong expertise in ETL development, data engineering, and data pipeline development
  - Proficiency in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics
  - Advanced knowledge of Databricks, PySpark, and Python for data processing
  - Hands-on experience with SQL Azure, SQL Server, and data warehousing solutions
  - Knowledge of Power BI for reporting and dashboard creation
  - Familiarity with Azure Functions, Azure DevOps, and cloud computing in Microsoft Azure
  - Understanding of data architecture and data modelling principles
  - Experience with big data tools and frameworks
- Experience:
  - Proven experience in designing and implementing large-scale data processing systems
  - Hands-on experience with DWH and handling big data workloads
  - Ability to work with both structured and unstructured datasets
- Soft Skills:
  - Strong problem-solving and analytical skills
  - Excellent communication and collaboration abilities to work effectively in a team environment
  - A proactive mindset with a passion for learning and adopting new technologies

Preferred Skills:
- Experience with Azure data warehouse technologies
- Knowledge of Azure Machine Learning or similar AI/ML frameworks
- Familiarity with data governance and data compliance practices
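As an illustration of the PySpark pipeline work listed above, a minimal ingest-transform-write sketch; the paths and columns are illustrative assumptions (on Azure these would typically be abfss:// data lake URIs).

```python
# A minimal PySpark ETL sketch; paths and schema are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

orders = spark.read.json("/raw/orders/")  # raw landing zone
clean = (
    orders
    .filter(F.col("amount") > 0)                      # basic quality rule
    .withColumn("order_date", F.to_date("order_ts"))  # derive partition column
)
# Partitioned parquet in the curated zone, ready for Synapse / Power BI.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")
```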
Posted 2 months ago
1 - 4 years
0 - 0 Lacs
Chennai
Hybrid
We're seeking a Data Operations Specialist (1–4 yrs exp.) in Chennai to manage and support data platforms and pipelines on Azure and on-premises, ensuring system reliability, performance, and availability.
Posted 2 months ago
2 - 4 years
4 - 6 Lacs
Hyderabad
Work from Office
Data Engineer - Graph, Research Data and Analytics

What you will do
Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph Team, which is seeking a qualified individual to design, build, and maintain solutions for scientific data that drive business decisions for Research. The successful candidate will construct scalable and high-performance data engineering solutions for extensive scientific datasets and collaborate with Research partners to address their data requirements. The ideal candidate should have experience in the pharmaceutical or biotech industry, leveraging their expertise in semantics, taxonomies, and linked data principles to ensure data harmonization and interoperability. Additionally, this individual should demonstrate robust technical skills, proficiency with data engineering technologies, and a thorough understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain semantic data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to standard processes for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 2-4 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Diploma with 7-9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field

Preferred Qualifications and Experience:
- 4+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with data technologies and platforms, such as Databricks, workflow orchestration, and performance tuning on big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Experience with system administration, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining user documentation in Confluence
- Understanding of data governance frameworks, tools, and standard processes

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
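As a small illustration of the pytest-based test automation named in the must-have skills, a minimal data-quality test sketch; the fixture data, loader, and column names are illustrative assumptions.

```python
# A minimal data-quality test sketch with pytest; the dataset is assumed.
import pandas as pd
import pytest


@pytest.fixture
def assay_results():
    # In a real suite this would load pipeline output, e.g. from Databricks.
    return pd.DataFrame({"compound_id": ["C1", "C2"], "ic50_nm": [12.5, 340.0]})


def test_no_missing_ids(assay_results):
    # Every row must carry a compound identifier.
    assert assay_results["compound_id"].notna().all()


def test_ic50_positive(assay_results):
    # Potency values must be strictly positive.
    assert (assay_results["ic50_nm"] > 0).all()
```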
Posted 2 months ago
14 - 20 years
35 - 45 Lacs
Bengaluru
Work from Office
Seeking a skilled and experienced Technical Manager to oversee successful planning and delivery. Lead end-to-end solution delivery, from requirements gathering to implementation. Coordinate with engineering, product management, and QA teams to ensure seamless execution.

Required Candidate Profile: Strong technical background. Experience in software engineering, solution delivery, and project management. Knowledge of Agile methodologies. Certifications in PMP, Scrum Master, ITIL, and RTE.
Posted 2 months ago
4 - 8 years
0 - 1 Lacs
Mohali
Work from Office
Job Title: Snowflake Developer (4+ years' experience)
Location: F, 384, Sector 91 Rd, Phase 8B, Industrial Area, Sector 91, Sahibzada Ajit Singh Nagar, Punjab 160055
Job Type: Full-time (in-house)

Job Overview: We are looking for an experienced Snowflake Developer with 4+ years of hands-on experience in Snowflake Data Warehouse and related tools. You will be responsible for building, managing, and optimizing Snowflake data pipelines, assisting in data integration, and contributing to the overall data architecture. The ideal candidate should have a strong understanding of data modeling and ETL processes, and experience working with cloud-based data platforms.

Responsibilities:
- Design, develop, and maintain Snowflake data warehouses
- Create and manage Snowflake schemas, tables, views, and materialized views
- Implement ETL processes to integrate data from various sources into Snowflake
- Optimize query performance and data storage in Snowflake
- Work with stakeholders to define data requirements and provide technical solutions
- Collaborate with Data Engineers, Data Scientists, and Analysts to build efficient data pipelines
- Monitor and troubleshoot performance issues in Snowflake environments
- Automate repetitive data processes and report-generation tasks
- Ensure data integrity, security, and compliance with data governance policies
- Assist in data migration and platform upgrades

Required Skills:
- 4+ years of experience working with Snowflake Data Warehouse
- Proficient in SQL, SnowSQL, and ETL processes
- Strong experience in data modeling and schema design in Snowflake
- Experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with data pipelines, data lakes, and data integration tools
- Experience in query optimization and performance tuning in Snowflake
- Understanding of data governance and best practices
- Strong knowledge of data security and privacy policies in a cloud environment
- Experience using tools like dbt, Airflow, or similar orchestration tools is a plus

Salary: No bar for deserving candidates.
Location: Mohali, Punjab (work from office)
Shift: Night shift

Other Benefits:
- 5-day work week
- US-based work culture and environment
- Indoor and outdoor events
- Paid leaves
- Health insurance
- Employee engagement activities like month-end and festival celebrations, team outings, and birthday celebrations
- Gaming and sports area

You may e-mail your resume to priyankaaggarwal@sourcemash.com
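For context on the schema and load work listed above, a minimal sketch using snowflake-connector-python; the connection details, table, and stage names are illustrative assumptions.

```python
# A minimal schema/load sketch; connection details and names are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()
cur.execute("CREATE SCHEMA IF NOT EXISTS staging")
cur.execute("""
    CREATE TABLE IF NOT EXISTS staging.orders (
        order_id NUMBER,
        order_ts TIMESTAMP_NTZ,
        amount NUMBER(12, 2)
    )
""")
# Stage-and-load pattern: files landed in an external stage are COPY'd in.
cur.execute("COPY INTO staging.orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)")
conn.close()
```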
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities

Urgent hiring for a reputed MNC. Immediate joiners only; female candidates only. Experience: 4-9 years. Locations: Bangalore / Hyderabad / Pune.

As a Python Developer with AWS, you will be responsible for developing cloud-based applications, building data pipelines, and integrating with various AWS services. You will work closely with DevOps, Data Engineering, and Product teams to design and deploy solutions that are scalable, resilient, and efficient in an AWS cloud environment.

Key Responsibilities:
- Python Development: Design, develop, and maintain applications and services using Python in a cloud environment.
- AWS Cloud Services: Leverage AWS services such as EC2, S3, Lambda, RDS, DynamoDB, and API Gateway to build scalable solutions.
- Data Pipelines: Develop and maintain data pipelines, including integrating data from various sources into AWS-based storage solutions.
- API Integration: Design and integrate RESTful APIs for application communication and data exchange.
- Cloud Optimization: Monitor and optimize cloud resources for cost efficiency, performance, and security.
- Automation: Automate workflows and deployment processes using AWS Lambda, CloudFormation, and other automation tools.
- Security & Compliance: Implement security best practices (e.g., IAM roles, encryption) to protect data and maintain compliance within the cloud environment.
- Collaboration: Work with DevOps, Cloud Engineers, and other developers to ensure seamless deployment and integration of applications.
- Continuous Improvement: Participate in the continuous improvement of development processes and deployment practices.

Required Qualifications:
- Python Expertise: Strong experience in Python programming, including libraries like Pandas, NumPy, and Boto3 (the AWS SDK for Python), and frameworks like Flask or Django.
- AWS Knowledge: Hands-on experience with AWS services such as S3, EC2, Lambda, RDS, DynamoDB, CloudFormation, and API Gateway.
- Cloud Infrastructure: Experience in designing, deploying, and maintaining cloud-based applications on AWS.
- API Development: Experience designing and developing RESTful APIs, integrating with external services, and managing data exchanges.
- Automation & Scripting: Experience with automation tools and scripts (e.g., using AWS Lambda, Boto3, CloudFormation).
- Version Control: Proficiency with version control tools such as Git.
- CI/CD Pipelines: Experience building and maintaining CI/CD pipelines for cloud-based applications.

Preferred candidate profile:
- Familiarity with serverless architectures using AWS Lambda and other AWS serverless services.
- AWS Certification (e.g., AWS Certified Developer - Associate, AWS Certified Solutions Architect - Associate) is a plus.
- Knowledge of containerization tools like Docker and orchestration platforms such as Kubernetes.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
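As an illustration of the Python-plus-AWS work this role describes, a minimal sketch of a Lambda handler behind API Gateway persisting events to DynamoDB via Boto3; the table name and event shape are illustrative assumptions.

```python
# A minimal Lambda handler sketch; table name and event shape are assumptions.
import json

import boto3

table = boto3.resource("dynamodb").Table("example-events")


def handler(event, context):
    # API Gateway proxy integration delivers the request body as a string.
    body = json.loads(event.get("body", "{}"))
    # Store the payload as a JSON string to sidestep DynamoDB's Decimal rules.
    table.put_item(Item={"pk": body["id"], "payload": json.dumps(body)})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```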
Posted 2 months ago