5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: AWS Architecture
Good-to-have skills: Python (Programming Language)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability. We are seeking an AWS Data Architect to lead the design and implementation of scalable, cloud-native data platforms. The ideal candidate will have deep expertise in AWS data services, along with hands-on proficiency in Python and PySpark for building robust data pipelines and processing frameworks.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.
- Design and implement enterprise-scale data lake and data warehouse solutions on AWS.
- Lead the development of ELT/ETL pipelines using AWS Glue, EMR, Lambda, and Step Functions, with Python and PySpark.
- Work closely with data engineers, analysts, and business stakeholders to define data architecture strategy.
- Define and enforce data modeling, metadata, security, and governance best practices.
- Create reusable architectural patterns and frameworks to streamline future development.
- Provide architectural leadership for migrating legacy data systems to AWS.
- Optimize performance, cost, and scalability of data processing workflows.

Professional & Technical Skills:
- Must-have skills: Proficiency in AWS Architecture.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of programming languages such as Python or Java for data processing.
- AWS services: S3, Glue, Athena, Redshift, EMR, Lambda, IAM, Step Functions, CloudFormation or Terraform
- Languages: Python, PySpark, SQL
- Big Data: Apache Spark, Hive, Delta Lake
- Orchestration & DevOps: Airflow, Jenkins, Git, CI/CD pipelines
- Security & Governance: AWS Lake Formation, Glue Catalog, encryption, RBAC
- Visualization: Exposure to BI tools like QuickSight, Tableau, or Power BI is a plus

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Architecture.
- This position is based at our Pune office.
- 15 years of full-time education is required.
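For illustration only: the Glue/EMR pipeline work this posting describes typically boils down to PySpark jobs of roughly the following shape. This is a minimal sketch, not part of the posting; the bucket paths, column names, and schema are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV files landed in S3 (bucket and layout are illustrative)
orders = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Basic cleansing and typing before promotion to the curated zone
curated = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
)

# Write partitioned Parquet that Athena or Redshift Spectrum can query
(
    curated.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```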
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a skilled Data Engineer with a solid background in building and maintaining scalable data pipelines and systems. You will work closely with data analysts, engineering teams, and business stakeholders to ensure seamless data flow across platforms.

Responsibilities
- Design, build, and optimize robust, scalable data pipelines (batch and streaming).
- Develop ETL/ELT processes using tools like Airflow, DBT, or custom scripts.
- Integrate data from various sources (e.g., APIs, S3, databases, SaaS tools).
- Collaborate with analytics and product teams to ensure high-quality datasets.
- Monitor pipeline performance and troubleshoot data quality or latency issues.
- Work with cloud data warehouses (e.g., Redshift, Snowflake, BigQuery).
- Implement data validation, error handling, and alerting for production jobs.
- Maintain documentation for pipelines, schemas, and data sources.

Requirements
- 3+ years of experience in Data Engineering or similar roles.
- Strong in SQL, with experience in data modeling and transformation.
- Hands-on experience with Python or Scala for scripting/data workflows.
- Experience working with Airflow, AWS (S3, Redshift, Lambda), or equivalent cloud tools.
- Knowledge of version control (Git) and CI/CD workflows.
- Strong problem-solving and communication skills.

Good To Have
- Experience with DBT, Kafka, or real-time data processing.
- Familiarity with BI tools (e.g., Tableau, Looker, Power BI).
- Exposure to Docker, Kubernetes, or DevOps practices.

This job was posted by Harika K from Invictus.
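The Airflow orchestration this role calls for is commonly wired up as a small DAG of extract, transform, and validate tasks. A minimal sketch under stated assumptions: the DAG id, schedule, and task bodies are hypothetical stand-ins, not taken from this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull raw records from a source API or S3 drop (stubbed here)
    print("extracting...")

def transform():
    # Clean and model the extracted data (stubbed here)
    print("transforming...")

def validate():
    # Fail the run on data-quality problems so alerting can fire
    print("validating row counts and schemas...")

with DAG(
    dag_id="daily_sales_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Run strictly in sequence; a failed validate marks the run failed
    extract_task >> transform_task >> validate_task
```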
Posted 4 days ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.

Responsibilities
- Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms.
- Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources.
- Build and support data pipelines for reporting, analytics, and machine learning applications.
- Implement and manage streaming data solutions using Kafka and other technologies.
- Design and optimize database schemas and data models in ClickHouse and other databases.
- Develop and maintain data workflows using Apache Airflow and similar orchestration tools.
- Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks.
- Collaborate with data scientists to implement ML infrastructure for model training and deployment.
- Ensure data quality, reliability, and security across all data platforms.
- Monitor data pipelines and implement proactive alerting systems.
- Troubleshoot and resolve data infrastructure issues.
- Document data flows, architectures, and processes.
- Stay current with industry trends and emerging technologies in data engineering.

Requirements
- Bachelor's degree in Computer Science, Engineering, or related technical field (Master's preferred).
- 5+ years of experience in data engineering roles.
- Strong expertise in AWS and/or GCP cloud platforms and services.
- Proficiency in building data pipelines using modern ETL/ELT tools and frameworks.
- Experience with stream processing technologies such as Kafka.
- Hands-on experience with ClickHouse or similar analytical databases.
- Strong programming skills in Python and experience with PySpark.
- Experience with workflow orchestration tools like Apache Airflow.
- Solid understanding of data modeling, data warehousing concepts, and dimensional modeling.
- Knowledge of SQL and NoSQL databases.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work in cross-functional teams.
- Experience in D2C, e-commerce, or retail industries.
- Knowledge of data visualization tools (Tableau, Looker, Power BI).
- Experience with real-time analytics solutions.
- Familiarity with CI/CD practices for data pipelines.
- Experience with containerization technologies (Docker, Kubernetes).
- Understanding of data governance and compliance requirements.
- Experience with MLOps or ML engineering technologies.

Technologies
- Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc).
- Data Processing: Apache Spark, PySpark, Python, SQL.
- Streaming: Apache Kafka, Kinesis.
- Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB.
- Orchestration: Apache Airflow.
- Version Control: Git.
- Containerization: Docker, Kubernetes (optional).

This job was posted by Sidharth Patra from Traya Health.
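For roles like this, Kafka-sourced streams are frequently consumed with PySpark Structured Streaming before landing in an analytical store. A minimal sketch only: the broker address, topic, and lake paths are hypothetical, and running it also assumes the Spark Kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Subscribe to a Kafka topic (broker address and topic are illustrative)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "user-events")
    .load()
)

# Kafka delivers bytes; cast and lightly shape before sinking
events = (
    raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
    .withColumn("ingest_date", F.to_date("timestamp"))
)

# Append to a lake path that a warehouse such as ClickHouse can ingest from
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-lake/events/")
    .option("checkpointLocation", "s3a://example-lake/checkpoints/events/")
    .start()
)
query.awaitTermination()
```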
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are looking for a highly skilled and hands-on Senior Data Engineer to join our growing data engineering practice in Mumbai. This role requires deep technical expertise in building and managing enterprise-grade data pipelines, with a primary focus on Amazon Redshift, AWS Glue, and data orchestration using Airflow or Step Functions. You will be responsible for building scalable, high-performance data workflows that ingest and process multi-terabyte-scale data across complex, concurrent environments. The ideal candidate is someone who thrives in solving performance bottlenecks, has led or participated in data warehouse migrations (e.g., Snowflake to Redshift), and is confident in interfacing with business stakeholders to translate requirements into robust data solutions.

Responsibilities
- Design, develop, and maintain high-throughput ETL/ELT pipelines using AWS Glue (PySpark), orchestrated via Apache Airflow or AWS Step Functions.
- Own and optimize large-scale Amazon Redshift clusters and manage high-concurrency workloads for a very large user base.
- Lead and contribute to migration projects from Snowflake or traditional RDBMS to Redshift, ensuring minimal downtime and robust validation.
- Integrate and normalize data from heterogeneous sources, including REST APIs, AWS Aurora (MySQL/Postgres), streaming inputs, and flat files.
- Implement intelligent caching strategies; leverage EC2 and serverless compute (Lambda, Glue) for custom transformations and processing at scale.
- Write advanced SQL for analytics, data reconciliation, and validation, demonstrating strong SQL development and tuning experience.
- Implement comprehensive monitoring, alerting, and logging for all data pipelines to ensure reliability, availability, and cost optimization.
- Collaborate directly with product managers, analysts, and client-facing teams to gather requirements and deliver insights-ready datasets.
- Champion data governance, security, and lineage, ensuring data is auditable and well-documented across all environments.

Requirements
- 2-4 years of core data engineering experience, with a focus on hands-on Amazon Redshift performance tuning and large-scale cluster management.
- Demonstrated experience handling multi-terabyte Redshift clusters and concurrent query loads, and managing complex workload segmentation and queue priorities.
- Strong experience with AWS Glue (PySpark) for large-scale ETL jobs.
- Solid understanding and implementation experience of workflow orchestration using Apache Airflow or AWS Step Functions.
- Strong proficiency in Python, advanced SQL, and data modeling concepts.
- Familiarity with CI/CD pipelines, Git, DevOps processes, and infrastructure-as-code concepts.
- Experience with Amazon Athena, Lake Formation, or S3-based data lakes.
- Hands-on participation in Snowflake, BigQuery, or Teradata migration projects.
- AWS certifications such as: AWS Certified Data Analytics - Specialty; AWS Certified Solutions Architect - Associate/Professional.
- Exposure to real-time streaming architectures or Lambda architectures.

Soft Skills & Expectations
- Excellent communication skills: able to confidently engage with both technical and non-technical stakeholders, including clients.
- Strong problem-solving mindset and a keen attention to performance, scalability, and reliability.
- Demonstrated ability to work independently, lead tasks, and take ownership of large-scale systems.
- Comfortable working in a fast-paced, dynamic, and client-facing environment.

This job was posted by Rituza Rani from Oneture Technologies.
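The advanced SQL for reconciliation and validation mentioned above often runs as small Python checks against Redshift. A minimal sketch using psycopg2; the cluster endpoint, credentials, and table names are placeholders, not from this posting.

```python
import psycopg2

# Connection details are placeholders for illustration only
conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="REPLACE_ME",
)

# Flag duplicated business keys in the staging schema
dedup_check = """
    SELECT order_id, COUNT(*) AS copies
    FROM staging.orders
    GROUP BY order_id
    HAVING COUNT(*) > 1;
"""

# Reconcile staged vs. loaded row counts after a migration batch
recon_check = """
    SELECT (SELECT COUNT(*) FROM staging.orders)   AS staged,
           (SELECT COUNT(*) FROM analytics.orders) AS loaded;
"""

with conn, conn.cursor() as cur:
    cur.execute(dedup_check)
    dupes = cur.fetchall()
    if dupes:
        raise ValueError(f"{len(dupes)} duplicated order_ids in staging")
    cur.execute(recon_check)
    staged, loaded = cur.fetchone()
    print(f"staged={staged} loaded={loaded} diff={staged - loaded}")
```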
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
We are looking for a skilled and experienced Data Engineer with over 5 years of experience in data engineering and data migration projects. The ideal candidate should possess strong expertise in SQL, Python, data modeling, data warehousing, and ETL pipeline development. Experience with big data tools like Hadoop and Spark, along with AWS services such as Redshift, S3, Glue, EMR, and Lambda, is essential. This role provides an excellent opportunity to work on large-scale data solutions, enabling data-driven decision-making and operational excellence.

Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and data warehouse architectures.
• Implement and manage big data technologies and cloud-based data solutions.
• Perform data migration, data transformation, and integration from multiple sources.
• Collaborate with data scientists, analysts, and business teams to understand data needs and deliver solutions.
• Ensure data quality, consistency, and security across all data pipelines and storage systems.
• Optimize performance and manage cost-efficient AWS cloud resources.

Basic Qualifications:
• Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent.
• 5+ years of experience in Data Engineering and data migration projects.
• Proficient in SQL and Python for data processing and analysis.
• Strong experience in data modeling, data warehousing, and building data pipelines.
• Hands-on experience with big data technologies like Hadoop, Spark, etc.
• Expertise in AWS services including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM.
• Understanding of ETL development best practices and principles.

Preferred Qualifications:
• Knowledge of data security and data privacy best practices.
• Experience with DevOps and CI/CD practices related to data workflows.
• Familiarity with data lake architectures and real-time data streaming.
• Strong problem-solving abilities and attention to detail.
• Excellent verbal and written communication skills.
• Ability to work independently and in a team-oriented environment.

Good to Have:
• Experience with orchestration tools like Airflow or Step Functions.
• Exposure to BI/Visualization tools like QuickSight, Tableau, or Power BI.
• Understanding of data governance and compliance standards.
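A common building block of the migration work described here is bulk-loading extracts from S3 into Redshift with COPY. A minimal sketch using the boto3 Redshift Data API; the cluster name, database, IAM role ARN, and S3 path are placeholders.

```python
import boto3

# All identifiers below are hypothetical placeholders
client = boto3.client("redshift-data", region_name="ap-south-1")

# Bulk-load a migrated Parquet extract from S3 into a Redshift table
resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="""
        COPY analytics.customers
        FROM 's3://example-migration-bucket/customers/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
        FORMAT AS PARQUET;
    """,
)

# The statement runs asynchronously; poll describe_statement with this id
print("statement id:", resp["Id"])
```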
Posted 4 days ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role - Data Analytics Architect
Exp. - 10+ years
Location - PAN India (prefer Thane, Mumbai, Hyderabad)
Required Technical Skill Set - Snowflake

Desired Competencies (Technical/Behavioural Competency):
- Experience in architecture definition, design and implementation of data lake solutions on Azure/AWS/Snowflake.
- Designs and models data lake architecture; implements standards, best practices and processes to improve the management of information and data throughout its lifecycle across this platform.
- Design and implement data engineering, ingestion and curation functions on the data lake using native components or custom programming (Azure/AWS).
- Proficient in tools/technologies: Azure (Azure Data Factory, Synapse, ADLS, Databricks etc.)/AWS (Redshift, S3, Glue, Athena, DynamoDB etc.)/Snowflake technology stack/Talend/Informatica.
- Analyse data requirements, application and processing architectures, data dictionaries and database schema(s).
- Analyses complex data systems and documents data elements, data flow, relationships, and dependencies.
- Collaborates with Infrastructure and Security Architects to ensure alignment with enterprise standards and designs.
- Data modelling, data warehousing, dimensional modelling, data modelling for Big Data & metadata management.
- Knowledge of data catalogue tools, metadata management and data quality management.
- Experience in design & implementation of dashboards using tools like Power BI, Qlik etc.
- Strong oral and written communication skills; good presentation skills.
- Analytical skills.
- Business orientation & acumen (exposure).
- Advisory experience, to be able to position or be seen as an expert.
- Willingness to travel internationally and collocate with clients for short or long terms.
- Basic knowledge of advanced analytics; exposure to leveraging Artificial Intelligence and Machine Learning for analysis of complex and large datasets; tools like Python/Scala etc.

Responsibilities
- Executing various consulting & implementation engagements for data lake solutions
- Data integration, data modelling, data delivery, statistics, analytics and math
- Identify the right solutions to business problems
- Learn and leverage tools/technologies and product solutions in the Data & Analytics area
- Implement advanced analytics and cognitive analytics models
- Support RFPs by providing business perspective, participate in RFP discussions, coordination within support groups in TCS
- Conduct business research and demonstrate thought leadership through analyst engagements, white papers and participation in industry focus areas
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer
Location: Hyderabad
Experience: 5+ Years

Job Summary:
We are looking for a skilled and experienced Data Engineer with over 5 years of experience in data engineering and data migration projects. The ideal candidate should possess strong expertise in SQL, Python, data modeling, data warehousing, and ETL pipeline development. Experience with big data tools like Hadoop and Spark, along with AWS services such as Redshift, S3, Glue, EMR, and Lambda, is essential. This role provides an excellent opportunity to work on large-scale data solutions, enabling data-driven decision-making and operational excellence.

Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and data warehouse architectures.
• Implement and manage big data technologies and cloud-based data solutions.
• Perform data migration, data transformation, and integration from multiple sources.
• Collaborate with data scientists, analysts, and business teams to understand data needs and deliver solutions.
• Ensure data quality, consistency, and security across all data pipelines and storage systems.
• Optimize performance and manage cost-efficient AWS cloud resources.

Basic Qualifications:
• Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent.
• 5+ years of experience in Data Engineering and data migration projects.
• Proficient in SQL and Python for data processing and analysis.
• Strong experience in data modeling, data warehousing, and building data pipelines.
• Hands-on experience with big data technologies like Hadoop, Spark, etc.
• Expertise in AWS services including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM.
• Understanding of ETL development best practices and principles.

Preferred Qualifications:
• Knowledge of data security and data privacy best practices.
• Experience with DevOps and CI/CD practices related to data workflows.
• Familiarity with data lake architectures and real-time data streaming.
• Strong problem-solving abilities and attention to detail.
• Excellent verbal and written communication skills.
• Ability to work independently and in a team-oriented environment.

Good to Have:
• Experience with orchestration tools like Airflow or Step Functions.
• Exposure to BI/Visualization tools like QuickSight, Tableau, or Power BI.
• Understanding of data governance and compliance standards.

Why Join Us?
People Tech Group has significantly grown over the past two decades, focusing on enterprise applications and IT services. We are headquartered in Bellevue, Washington, with a presence across the USA, Canada, and India. We are also expanding to the EU, ME, and APAC regions. With a strong pipeline of projects and satisfied customers, People Tech has been recognized as a Gold Certified Partner for Microsoft and Oracle.

Benefits:
• L1 Visa opportunities to the USA after 1 year of a proven track record.
• Competitive wages with private healthcare cover.
• Incentives for certifications and educational assistance for relevant courses.
• Support for family with maternity leave.
• Complimentary daily lunch and participation in employee resource groups.

For more details, please visit People Tech Group.
Posted 4 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

What will you be doing?
- Develop real-time streaming & batch data pipelines.
- Deliver high-quality data engineering components and services that are robust and scalable.
- Collaborate and communicate effectively with cross-functional teams to ensure delivery of strong results.
- Employ methodical approaches to data modeling, data quality, and data governance.
- Provide guidance on architecture, design, and quality engineering practices to the team.
- Leverage foundational data infrastructure to support analytics, BI, and visualization layers.
- Work closely with data scientists on feature engineering, model training frameworks, and model deployments at scale.

What are we looking for?
- BS/MS in Computer Science or related field, or an equivalent combination of education and experience.
- A minimum of 6 years of experience in software engineering, with hands-on experience in building data pipelines and big data technologies.
- Proficiency with Big Data technologies such as Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR, and other AWS services (S3, Lambda, EMR).
- Expertise in at least one programming language: Python, Java, or Scala.
- Extensive experience in designing and building data models, integrating data from various sources, building ETL/ELT and data-flow pipelines, and supporting all parts of the data platform.
- Expert-level SQL programming knowledge and experience.
- Experience with any enterprise reporting and/or data visualization tools like Strategy, Cognos, Tableau, Looker, PowerBI, Superset, QlikView etc.
- Strong data analysis skills, capable of making data-driven arguments and effective visualizations.
- Energetic, enthusiastic, and detail-oriented.

Bonus Points
- Experience in the e-commerce/retail domain.
- Knowledge of StarRocks.
- Knowledge of web services, API integration, and data exchanges with third parties.
- Familiarity with basic statistical analysis and machine learning concepts.
- A passion for producing high-quality analytics deliverables.
Posted 4 days ago
5.0 years
3 - 4 Lacs
Hyderābād
On-site
Job Description

Analytics Engineer

We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier “Data First” commercial biopharma organization.

As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical/data expertise in the development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.

Your specific responsibilities will include:
- Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices
- Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
- Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
- Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models
- Develop analytical data products for reusability, governance and compliance by design
- Align with organization strategy and implement a semantic layer for analytics data products
- Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks

Education:
B.Tech/B.S., M.Tech/M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or related field

Required experience:
- 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets)
- High proficiency in SQL, Python and AWS
- Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer
- Experience creating/adopting data models to meet requirements from Marketing, Data Science and Visualization stakeholders
- Experience with feature engineering
- Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
- Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku)
- Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders
- Experience in analytics use cases of pharmaceutical products and vaccines
- Experience in market analytics and related use cases

Preferred experience:
- Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines
- Experience with Agile ways of working, leading or working as part of scrum teams
- Certifications in AWS and/or modern data technologies
- Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors
- Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders
- Experience with data visualization technologies (e.g., Power BI)

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 08/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R335386
Posted 4 days ago
5.0 years
3 - 5 Lacs
Gurgaon
On-site
With 5 years of experience in Python, PySpark, and SQL, you will have the necessary skills to handle a variety of tasks. You will also have hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. You will be based out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS native data services are the main skills and are mandatory for the role. Additionally, knowledge of Big Data is a nice-to-have skill that will set you apart from other candidates.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 4 days ago
0 years
0 Lacs
Gurgaon
On-site
8-10 yrs of operational knowledge of microservices and .NET full stack, C# or Python development, as well as Docker
- Experience with PostgreSQL or Oracle
- Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
- Real desire to master new technologies
- Unit testing & TDD methodology are assets
- Team spirit, analytical and synthesis skills
- Passion, software craftsmanship, culture of excellence, Clean Code
- Fluency in English (multicultural and international team)

What Technical Skills You'll Develop
- C# .NET and/or Python
- Oracle, PostgreSQL
- AWS
- ELK (Elasticsearch, Logstash, Kibana)
- Git, GitHub, TeamCity, Docker, Ansible
Posted 4 days ago
6.0 - 10.0 years
7 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Data Engineering, Airflow, Fivetran, CI/CD using GitHub

We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience in building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce & NetSuite systems
- Experience in SaaS environments
- Designed and deployed ML models
- Experience with events and streaming data

Location: Remote, Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 4 days ago
3.0 - 5.0 years
40 - 45 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Build, maintain, update, and manage complex Tableau dashboards to provide actionable insights to business stakeholders.
- Perform ad-hoc data analysis using Python, SQL, and basic AWS Cloud skills.
- Collaborate with peer data analysts to collectively manage and optimize dashboards critical to business operations.
- Work closely with stakeholders to identify KPIs, develop reports, and ensure alignment with VG standards and leading practices.
- Analyse data from diverse sources to provide insights that drive decision-making processes.
- Navigate ambiguity and proactively design solutions that meet stakeholder needs while adhering to organizational goals.

Mandatory Technical Skills:
- Proficiency in Tableau for creating and managing dashboards.
- Strong skills in SQL for querying and data extraction.
- Working knowledge of Python for data manipulation and analysis.
- Basic understanding of AWS Cloud concepts and tools to perform cloud-based data analysis.

Preferred Skills:
- Familiarity with AWS services like S3, Redshift, or Athena is a plus.
- Experience in developing and maintaining KPI reports adhering to business standards.
- Strong problem-solving skills with the ability to work independently and collaboratively.
- Excellent communication and stakeholder management abilities.
Posted 4 days ago
5.0 - 6.0 years
0 Lacs
Andhra Pradesh
On-site
Title: Developer (AWS Engineer)

Requirements:
- Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
- Strong hands-on experience; proficient in Node.js and Python.
- Seasoned developer capable of independently driving development tasks.
- Ability to understand the existing system architecture and work towards the target architecture.
- Experience with data profiling activities, discovering data quality challenges and documenting them.
- Good to have: experience with development and implementation of large-scale Data Lake and data analytics platforms on the AWS Cloud platform.
- Develop and unit test data pipeline architecture for data ingestion processes using AWS native services.
- Experience with development on AWS Cloud using AWS services such as Redshift, RDS, S3, Glue ETL, Glue Data Catalog, EMR, PySpark, Python, Lake Formation, Airflow, SQL scripts, etc.
- Good to have: experience with building data analytical platforms using Databricks (data pipelines) and Starburst (semantic layer) in an AWS cloud environment.
- Experience with orchestration of workflows in an enterprise environment.
- Experience working with source code management tools such as AWS CodeCommit or GitHub.
- Experience working with Jenkins or any CI/CD pipelines using AWS services.
- Working experience with Agile methodology.
- Experience working with an onshore/offshore model and collaboratively working on deliverables.
- Good communication skills to interact with the onshore team.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 4 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position - Technical Architect
Location - Pune
Experience - 6+ Years

ABOUT HASHEDIN
We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

WHY SHOULD YOU JOIN US?
With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn, your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic fun culture of inclusion, collaboration, and high performance – HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level.

JOB TITLE - Technical Architect
B.E/B.Tech, MCA, or M.E/M.Tech graduate with 6-10 years of experience (this includes 4 years of experience as an application architect or data architect)
• Java/Python/UI/DE
• GCP/AWS/Azure
• Generative AI-enabled application design pattern knowledge is a value addition.
• Excellent technical background with a breadth of knowledge across analytics, cloud architecture, distributed applications, integration, API design, etc.
• Experience in technology stack selection and the definition of solution, technology, and integration architectures for small to mid-sized applications and cloud-hosted platforms.
• Strong understanding of various design and architecture patterns.
• Strong experience in developing scalable architecture.
• Experience implementing and governing software engineering processes, practices, tools, and standards for development teams.
• Proficient in effort estimation techniques; will actively support project managers and scrum masters in planning the implementation and will work with test leads on the definition of an appropriate test strategy for the realization of a quality solution.
• Extensive experience as a technology/engineering subject matter expert, i.e., high-level solution definition, sizing, and RFI/RFP responses.
• Aware of the latest technology trends, engineering processes, practices, and metrics.
• Architecture experience with PaaS and SaaS platforms hosted on Azure, AWS, or GCP.
• Infrastructure sizing and design experience for on-premise and cloud-hosted platforms.
• Ability to understand the business domain & requirements and map them to technical solutions.
• Outstanding interpersonal skills. Ability to connect and present to CXOs from client organizations.
• Strong leadership, business communication, consulting, and presentation skills.
• Positive, service-oriented personality

OVERVIEW OF THE ROLE:
This role serves as a model for the application of team software development processes and deployment procedures. Additionally, the incumbent actively contributes to the establishment of best practices and methodologies within the team. Craft and deploy resilient APIs, bridging cloud infrastructure and software development with seamless API design, development, and deployment.
• Works at the intersection of infrastructure and software engineering by designing and deploying data and pipeline management frameworks built on top of open-source components, including Hadoop, Hive, Spark, HBase, Kafka streaming, Tableau, Airflow, and other cloud-based data engineering services like S3, Redshift, Athena, Kinesis, etc.
• Collaborate with various teams to build and maintain the most innovative, reliable, secure, and cost-effective distributed solutions.
• Design and develop big data and real-time analytics and streaming solutions using industry-standard technologies.
• Deliver the most complex and valuable components of an application on time as per the specifications.
• Plays the role of a Team Lead; manages or influences a large portion of an account or a small project in its entirety, demonstrating an understanding of and consistently incorporating practical value with theoretical knowledge to make balanced technical decisions.
Posted 4 days ago
5.0 years
0 Lacs
Andhra Pradesh, India
On-site
Data Engineer

Must have 5+ years of experience in the skills mentioned below.
Must Have: Big Data concepts, Python (core Python, able to write code), SQL, shell scripting, AWS S3
Good to Have: Event-driven/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
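The event-driven S3/SQS pattern listed among the good-to-haves usually pairs S3 event notifications with a long-polling SQS consumer. A minimal sketch using boto3; the queue URL, region, and account id are hypothetical placeholders.

```python
import json

import boto3

# Queue URL and region are hypothetical placeholders
sqs = boto3.client("sqs", region_name="ap-south-1")
s3 = boto3.client("s3", region_name="ap-south-1")
queue_url = "https://sqs.ap-south-1.amazonaws.com/123456789012/file-events"

# Poll for S3 event notifications and fetch each referenced object
while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        for record in body.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            obj = s3.get_object(Bucket=bucket, Key=key)
            print(f"processing s3://{bucket}/{key} ({obj['ContentLength']} bytes)")
        # Remove the message only after successful processing
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```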
Posted 4 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Job
The Director of Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization’s business goals and enables data-driven decision making.

Roles and Responsibilities
- Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer, in this space
- Collaborate with and influence leadership to directly impact company strategy and direction
- Develop new techniques and data pipelines that will enable various insights for internal and external customers
- Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing
- Communicate effectively to all levels of the organization, including executives
- Provide success in partnering teams with dramatically varying backgrounds, from the highly technical to the highly creative
- Design a data engineering roadmap and execute the vision behind it
- Hire, lead, and mentor a world-class data team
- Partner with other business areas to co-author and co-drive strategies on our shared roadmap
- Oversee the movement of large amounts of data into our data lake
- Establish a customer-centric approach and synthesize customer needs
- Own end-to-end pipelines and destinations for the transfer and storage of all data
- Manage third-party resources and critical data integration vendors
- Promote a culture that drives autonomy, responsibility, perfection and mastery
- Maintain and optimize software and cloud expenses to meet the financial goals of the company
- Provide technical leadership to the team in design and architecture of data products and drive change across process, practices, and technology within the organization
- Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department
- Ensure data quality, security, and accessibility across the organization

Skills You Will Need
- 10+ years of experience in data engineering
- 5+ years of experience leading data teams of 30+ resources or more, including selection of talent and planning/allocating resources across multiple geographies and functions
- 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc.
- Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software
- Hands-on experience with code version control systems (Git)
- Experience with CI/CD, data architectures, pipelines, quality, and code management
- Experience with complex, high-volume, multi-dimensional data, based on unstructured, structured, and streaming datasets
- Experience with SQL and NoSQL databases
- Experience creating, testing, and supporting production software and systems
- Proven track record of identifying and resolving performance bottlenecks for production systems
- Experience designing and developing data lake, data warehouse, ETL and task orchestrating systems
- Strong leadership, communication, time management and interpersonal skills
- Proven architectural skills in data engineering
- Experience leading teams developing production-grade data pipelines on large datasets
- Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model
- Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks)
- Extensive experience with cloud tools and technologies, GCP preferred
- Experience managing real-time data pipelines
- Successful track record and demonstrated thought leadership, cross-functional influence and partnership within an agile/waterfall development environment
- Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001)

Nice to have:
- HR services industry experience
- Experience in data science, including predictive modeling
- Experience leading teams across multiple geographies
Posted 4 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Company: DataRepo Private Limited
Work Mode: Remote (Work From Home)
Shift Timing: 6:00 PM – 2:00 AM IST (strict)
Salary: ₹35,000/month (fixed for both roles)
NDA Required

Open Positions:

Big Data & AWS Developer
Experience Required: 3+ Years
Key Skills:
- Strong in PySpark, Spark SQL, Hive, Airflow, and HDFS
- Hands-on with AWS services: EMR, Glue, Lambda, Athena, S3, Redshift, Step Functions, etc.
- Proficient in Python, SQL, MySQL, Oracle
- Familiar with CI/CD tools: GitHub, Jenkins, Jira
- Experience with data validation, anomaly detection, and schema checks
- Ability to work independently and manage high-volume data pipelines

Data Science Engineer – Fraud Detection
Experience Required: 7+ Years
Key Skills:
- Expert in Python, SQL, and machine learning algorithms
- Hands-on with Databricks, Spark, and Azure ML
- Experience building fraud detection models for credit card transactions
- Familiar with real-time data streaming tools like Kafka
- Knowledge of drift detection, model monitoring, and retraining workflows
- Worked with cross-functional teams in fraud ops, compliance, and engineering

Important Terms for Both Roles:
- Remote night shift: 6:00 PM to 2:00 AM IST (strictly followed)
- Salary: ₹35,000/month (fixed)
- Must use your own laptop and have a stable internet connection
- Signing of an NDA (Non-Disclosure Agreement) is mandatory
- You must not disclose your salary or internal project details to the client
- You will be represented by DataRepo Private Limited during the interview process
- The interview will have two rounds, and the client will make the final decision on selection
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JD - Data Engineer

Pattern values data and the engineering required to take full advantage of it. As a Data Engineer at Pattern, you will be working on business problems that have a huge impact on how the company maintains its competitive edge.

Essential Duties And Responsibilities
- Develop, deploy, and support real-time, automated, scalable data streams from a variety of sources into the data lake or data warehouse.
- Develop and implement data auditing strategies and processes to ensure data quality; identify and resolve problems associated with large-scale data processing workflows; implement technical solutions to maintain data pipeline processes and troubleshoot failures.
- Collaborate with technology teams and partners to specify data requirements and provide access to data.
- Tune application and query performance using profiling tools and SQL or other relevant query languages.
- Understand business, operations, and analytics requirements for data.
- Build data expertise and own data quality for assigned areas of ownership.
- Work with data infrastructure to triage issues and drive to resolution.

Required Qualifications
- Bachelor's degree in Data Science, Data Analytics, Information Management, Computer Science, Information Technology, a related field, or equivalent professional experience
- Overall experience of more than 4+ years
- 3+ years of experience working with SQL
- 3+ years of experience in implementing modern data architecture-based data warehouses
- 2+ years of experience working with data warehouses such as Redshift, BigQuery, or Snowflake, and an understanding of data architecture design
- Excellent software engineering and scripting knowledge
- Strong communication skills (both in presentation and comprehension) along with the aptitude for thought leadership in data management and analytics
- Expertise with data systems, working with massive data sets from various data sources
- Ability to lead a team of Data Engineers

Preferred Qualifications
- Experience working with time series databases
- Advanced knowledge of SQL, including the ability to write stored procedures, triggers, analytic/windowing functions, and tuning
- Advanced knowledge of Snowflake, including the ability to write and orchestrate streams and tasks
- Background in Big Data, non-relational databases, Machine Learning and Data Mining
- Experience with cloud-based technologies including SNS, SQS, SES, S3, Lambda, and Glue
- Experience with modern data platforms like Redshift, Cassandra, DynamoDB, Apache Airflow, Spark, or ElasticSearch
- Expertise in Data Quality and Data Governance

Our Core Values
- Data Fanatics: Our edge is always found in the data
- Partner Obsessed: We are obsessed with partner success
- Team of Doers: We have a bias for action
- Game Changers: We encourage innovation

About Pattern
Pattern is the premier partner for global e-commerce acceleration and is headquartered in Utah's Silicon Slopes tech hub—with offices in Asia, Australia, Europe, the Middle East, and North America. Valued at $2 billion, Pattern has been named one of the fastest-growing tech companies in North America by Deloitte and one of the best-led companies in America by Inc. More than 100 global brands—like Nestle, Sylvania, Kong, Panasonic, and Sorel—rely on Pattern's global e-commerce acceleration platform to scale their business around the world. We place employee experience at the center of our business model and have been recognized as one of America's Most Loved Workplaces®. https://pattern.com/
Posted 4 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hi All,

Greetings! We have openings at the Gurugram location for the following role:
- Hands-on in SQL and its Big Data variants (HiveQL, Snowflake ANSI, Redshift SQL)
- Python and Spark and one or more of its APIs (PySpark, Spark SQL, Scala), Bash/shell scripting
- Experience with source code control: GitHub, VSTS etc.
- Knowledge of and exposure to Big Data technologies in the Hadoop stack, such as HDFS, Hive, Impala, Spark etc., and cloud Big Data warehouses: Redshift, Snowflake etc.
- Experience with UNIX command-line tools
- Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc.
- Understanding of and ability to translate/physicalise data models (star schema, Data Vault 2.0 etc.)
- Design, develop, test, deploy, maintain and improve software
- Develop flowcharts, layouts and documentation to identify requirements & solutions

Skill: AWS + SQL + Python is mandatory
Experience: 4 to 12 years
NOTE: Face-to-face interviews are happening in the Gurugram office on 2nd August 2025.
NOTE: We need people who can join by September at the latest.
Apply at rashwinder.kaur@qmail.quesscorp.com
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
- 4-6 years of good hands-on exposure to Big Data technologies: PySpark (DataFrame and SparkSQL), Hadoop, and Hive
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis and research skills
- Demonstrable ability to think outside the box and not be dependent on readily available tools
- Excellent communication, presentation and interpersonal skills are a must

Good to have:
- Hands-on experience with cloud-platform Big Data technologies (i.e., IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow, and experience with any job scheduler
- Experience in migrating workloads from on-premise to cloud, and cloud-to-cloud migrations

Roles & Responsibilities
- Develop efficient ETL pipelines as per business requirements, following the development standards and best practices.
- Perform integration testing of the created pipelines in the AWS environment.
- Provide estimates for development, testing and deployment on different environments.
- Participate in code peer reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines with the required AWS services, i.e., S3, IAM, Glue, EMR, Redshift etc.
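Since this role calls for both the PySpark DataFrame API and SparkSQL, the sketch below shows the same aggregation written both ways. The toy dataset and column names are illustrative, not from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("df-vs-sql").getOrCreate()

# Toy data standing in for a warehouse table (values are illustrative)
sales = spark.createDataFrame(
    [("north", 120.0), ("south", 75.5), ("north", 42.0)],
    ["region", "amount"],
)

# The same aggregation expressed through the DataFrame API...
by_region_df = sales.groupBy("region").agg(F.sum("amount").alias("total"))

# ...and through SparkSQL against a temp view
sales.createOrReplaceTempView("sales")
by_region_sql = spark.sql(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)

by_region_df.show()
by_region_sql.show()
```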
Posted 4 days ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring a Global IT Data Architect (Senior Manager) for the Gurgaon location for a leading management consulting firm.
Exp: 12 yrs and above
Tech stack: Snowflake, Data Architect, Data Model, AWS, Certificate - Snowflake
Work Mode: Hybrid
Please share resumes at leeba@mounttalent.com

Essential Education
- Minimum of a Bachelor's degree in Computer Science, Engineering or a similar field
- Additional certification in Data Management or cloud data platforms like Snowflake preferred

Essential Experience & Job Requirements
- 12+ years of IT experience with a major focus on data warehouse/database-related projects
- Expertise in cloud databases like Snowflake, Redshift etc.
- Expertise in data warehousing architecture, BI/analytical systems, data cataloguing, MDM etc.
- Proficient in conceptual, logical, and physical data modelling
- Proficient in documenting all the architecture-related work performed
- Proficient in data storage, ETL/ELT and data analytics tools like AWS Glue, DBT/Talend, Fivetran, APIs, Tableau, Power BI, Alteryx etc.
- Experience in building data solutions to support comp benchmarking, pay transparency/pay equity and total rewards use cases preferred
- Experience with cloud Big Data technologies such as AWS, Azure, GCP and Snowflake a plus
- Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs) a plus
- Excellent written and oral communication and presentation skills to present architecture, features, and solution recommendations are a must
Posted 4 days ago
3.0 years
10 - 15 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities
- Partner with product managers, engineers, and business stakeholders to define KPIs and success metrics for Creator Success
- Create comprehensive dashboards and self-service analytics tools using QuickSight, Tableau, or similar BI platforms
- Perform deep-dive analysis on customer behavior, content performance, and livestream engagement patterns
- Design, build, and maintain robust ETL/ELT pipelines to process large volumes of streaming and batch data from the Creator Success platform
- Develop and optimize data warehouses, data lakes, and real-time analytics systems using AWS services (Redshift, S3, Kinesis, EMR, Glue)
- Implement data quality frameworks and monitoring systems to ensure data accuracy and reliability

Qualifications
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field
- 3+ years of experience in business intelligence/analytics roles with proficiency in SQL, Python, and/or Scala
- Strong experience with AWS cloud services (Redshift, S3, EMR, Glue, Lambda, Kinesis)
- Expertise in building and optimizing ETL pipelines and data warehousing solutions
- Proficiency with big data technologies (Spark, Hadoop) and distributed computing frameworks
- Experience with business intelligence tools (QuickSight, Tableau, Looker) and data visualization best practices
- High proficiency in SQL and Python

Skills: AWS Lambda, QuickSight, Power BI, AWS S3, AWS, Tableau, AWS Kinesis, ETL, SQL, AWS Redshift, Scala, AWS EMR, business intelligence, Hadoop, Spark, AWS Glue, data warehousing, Python
Posted 4 days ago