5.0 - 10.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: AWS Data Engineer | Experience: 5-10 Years | Location: Bangalore
Technical Skills:
- 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena
- Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3 (see the sketch below)
- Execute Glue crawlers to catalog S3 files; create a catalog of S3 files for easier querying
- Create SQL queries in Athena
- Define data lifecycle management for S3 files
- Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio
- Ability to connect Glue ETLs to AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3
- Proficiency in setting up and managing Glue Crawlers to catalog data in S3
- Deep understanding of S3 architecture and best practices for storing large datasets
- Experience in partitioning and organizing data for efficient querying in S3
- Knowledge of the Parquet file format's advantages for optimized storage and querying
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance
- Familiarity with creating views or transformations in Athena for business use cases
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption
- Understanding of regulatory requirements (e.g., GDPR) and implementation of secure data handling practices
Non-Technical Skills:
- Good team player with effective interpersonal, team-building, and communication skills
- Ability to explain complex technology to a non-technical audience in a simple and precise manner
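As a concrete illustration of the core task above (converting RDS tables to Parquet in S3 via Glue), here is a minimal PySpark Glue job sketch. The catalog database, table name, bucket, and partition keys are hypothetical placeholders, not details from this posting.

```python
# A minimal Glue ETL sketch: read a table from an RDS (SQL Server) source
# registered in the Glue Data Catalog and write it to S3 as partitioned
# Parquet. All names below are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table via the Glue Data Catalog (populated by a crawler).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sqlserver_sales_db",   # hypothetical catalog database
    table_name="dbo_orders",         # hypothetical catalog table
)

# Write as Parquet to S3, partitioned for efficient Athena queries.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={
        "path": "s3://example-data-lake/orders/",  # hypothetical bucket
        "partitionKeys": ["order_year", "order_month"],
    },
    format="parquet",
)

job.commit()
```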
Posted 2 months ago
4.0 - 6.0 years
2 - 6 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Job Title: Sr. AWS Data Engineer | Experience: 4-6 Years | Location: Pune, Hyderabad, Gurgaon, Bangalore (Hybrid)
Skills: PySpark, Python, SQL, AWS services (S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch).
Posted 2 months ago
7.0 - 12.0 years
2 - 6 Lacs
Chennai
Work from Office
Job Title: Database Administrator | Experience: 7-14 Years | Location: Chennai
We are looking for a highly skilled Database Administrator (DBA) to manage, maintain, and optimize our databases across multiple platforms. The ideal candidate will have extensive experience with AWS RDS, Microsoft SQL Server, and MongoDB, along with a strong understanding of database security, performance tuning, and high-availability architectures. This role is crucial in ensuring data integrity, security, and efficiency for our SaaS applications while meeting HIPAA and other healthcare compliance requirements.
Key Responsibilities
Database Management & Administration: Design, configure, and maintain AWS RDS (PostgreSQL, MySQL, SQL Server), Microsoft SQL Server, and MongoDB databases. Ensure high availability, performance, and scalability of all databases. Implement backup and disaster recovery strategies, including point-in-time recovery (PITR) and failover mechanisms. Monitor and optimize database performance using tools like AWS CloudWatch, SQL Profiler, and MongoDB Atlas Performance Advisor. Manage database provisioning, patching, and version upgrades in production and non-production environments.
Security & Compliance: Enforce data security best practices, including encryption, access controls (IAM, RBAC), and compliance with HIPAA and other healthcare regulations. Perform regular security audits and vulnerability assessments using tools like AWS Security Hub and Tenable. Implement and maintain database auditing, logging, and monitoring to detect and prevent unauthorized access.
Optimization & Automation: Analyze and optimize query performance, indexing strategies, and database schema design. Automate database maintenance tasks using Terraform, AWS Lambda, PowerShell, or Python scripts (a minimal Python sketch follows this posting). Work with DevOps to integrate CI/CD pipelines for database changes (e.g., Flyway, Liquibase). Optimize storage and resource utilization in AWS to reduce costs while maintaining performance.
Collaboration & Support: Work closely with DevOps, Engineering, and Security teams to ensure database reliability and security. Provide guidance and best practices to developers on database design, indexing, and query performance tuning. Support application teams with troubleshooting, query optimization, and data modeling. Participate in the on-call rotation for database-related incidents and outages.
Required Qualifications & Experience:
- 5+ years of experience as a Database Administrator in a SaaS or cloud environment
- Strong expertise in AWS RDS (PostgreSQL, MySQL, or SQL Server)
- Proficient in Microsoft SQL Server, including T-SQL, SSMS, and high-availability configurations
- Experience with NoSQL databases like MongoDB (Atlas preferred)
- Deep understanding of performance tuning, query optimization, indexing strategies, and partitioning
- Familiarity with Terraform, AWS CloudFormation, or other Infrastructure-as-Code (IaC) tools
- Experience with backup and disaster recovery strategies in AWS and on-prem environments
- Knowledge of database replication, clustering, and high-availability architectures
- Proficiency in scripting (Python, PowerShell, Bash) for automation
- Strong knowledge of security best practices (IAM, RBAC, data encryption, audit logging)
- Familiarity with healthcare compliance requirements (HIPAA, HITRUST) is a plus
Preferred Skills & Certifications:
- AWS Certified Database – Specialty
- Microsoft Certified: Azure Database Administrator Associate
- MongoDB Certified DBA Associate
- Experience with AI/ML-driven database performance optimization tools
- Exposure to data warehousing and analytics (Redshift, Snowflake, or BigQuery)
Other Duties: Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without advance notice. Any changes may be for an indeterminate time frame.
EEO Statement
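The automation duties above (RDS maintenance via Python or Lambda) could look something like this minimal boto3 sketch, which takes a manual RDS snapshot and prunes old ones. The instance identifier and retention window are hypothetical.

```python
# A maintenance-automation sketch: create a manual RDS snapshot (on top of
# automated backups) and delete manual snapshots older than a retention
# window. Runnable standalone or as a Lambda handler.
import datetime
import boto3

rds = boto3.client("rds")
INSTANCE_ID = "prod-postgres-01"   # hypothetical RDS instance
RETENTION_DAYS = 14                # hypothetical retention window

def take_and_prune_snapshots(event=None, context=None):
    now = datetime.datetime.now(datetime.timezone.utc)
    # Create a manual snapshot with a timestamped identifier.
    rds.create_db_snapshot(
        DBSnapshotIdentifier=f"{INSTANCE_ID}-manual-{now:%Y%m%d%H%M}",
        DBInstanceIdentifier=INSTANCE_ID,
    )
    # Delete manual snapshots older than the retention window.
    cutoff = now - datetime.timedelta(days=RETENTION_DAYS)
    snapshots = rds.describe_db_snapshots(
        DBInstanceIdentifier=INSTANCE_ID, SnapshotType="manual"
    )["DBSnapshots"]
    for snap in snapshots:
        created = snap.get("SnapshotCreateTime")  # absent while still creating
        if created and created < cutoff:
            rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])

if __name__ == "__main__":
    take_and_prune_snapshots()
```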
Posted 2 months ago
0.0 - 5.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Job Title: SnapLogic | Experience: 0-5 Years | Location: Bangalore
Skills: SnapLogic
Posted 2 months ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: EMR_Spark SME | Experience: 5-10 Years | Location: Bangalore
Technical Skills:
- 5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark
- Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing
- Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS)
- Solid understanding of distributed systems architecture and cluster resource management (YARN)
- Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena)
- Experience in scripting and programming languages such as Python, Scala, and Java
- Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus
Responsibilities:
- Architect and develop scalable data processing solutions using AWS EMR and Apache Spark
- Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters (see the sketch below)
- Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads
- Implement best practices for cluster management, data partitioning, and job execution
- Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.)
- Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines
- Ensure data security and governance in EMR and Spark environments in compliance with company policies
- Provide technical leadership and mentorship to junior engineers and data analysts
- Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades
Requirements and Skills:
- Performance tuning and optimization of Spark jobs
- Problem-solving skills with the ability to diagnose and resolve complex technical issues
- Strong experience with version control systems (Git) and CI/CD pipelines
- Excellent communication skills to explain technical concepts to both technical and non-technical audiences
Qualification: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college
Certifications: AWS Certified Solutions Architect – Associate/Professional; AWS Certified Data Analytics – Specialty
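For a flavor of the Spark-on-EMR tuning work this role describes, here is a minimal PySpark sketch with two common knobs (shuffle partitions and adaptive execution) and a partitioned Parquet rewrite. The settings and S3 paths are hypothetical starting points, not prescriptions.

```python
# A tuning-oriented PySpark sketch for an EMR cluster: configure shuffle
# parallelism, enable adaptive query execution, and rewrite raw JSON as
# partitioned Parquet for cheaper, faster downstream queries.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("emr-parquet-compaction")
    # Assumption: sized to the cluster's total cores; tune per workload.
    .config("spark.sql.shuffle.partitions", "200")
    # Let Spark coalesce small shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

events = spark.read.json("s3://example-raw-bucket/events/")  # hypothetical path
(
    events.select("event_id", "user_id", "event_type", "event_date")
    .repartition("event_date")          # one output group per partition value
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events_parquet/")  # hypothetical path
)
spark.stop()
```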
Posted 2 months ago
12.0 - 20.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: Senior Software Engineer | Experience: 12-20 Years | Location: Bangalore
- Strong knowledge and hands-on experience in AWS Databricks
- Nice to have: experience in the HP ecosystem (FDL architecture)
- Technically strong enough to help the team with any technical issues they face during execution; owns the end-to-end technical deliverables
- Hands-on Databricks and SQL knowledge
- Experience in AWS S3, Redshift, EC2, and Lambda services
- Extensive experience in developing and deploying big data pipelines
- Experience in Azure Data Lake
- Strong hands-on SQL development / Azure SQL and in-depth understanding of optimization and tuning techniques in SQL with Redshift
- Development in notebooks (Jupyter, Databricks, Zeppelin, etc.)
- Development experience in Spark
- Experience in a scripting language like Python, plus any other programming language
Roles and Responsibilities:
- Candidate must have hands-on experience in AWS Databricks
- Good development experience using Python/Scala, Spark SQL, and DataFrames
- Hands-on experience with Databricks and Data Lake; SQL knowledge is a must
- Performance tuning, troubleshooting, and debugging of Spark
Process Skills: Agile (Scrum)
Qualification: Bachelor of Engineering (Computer background preferred)
Posted 2 months ago
10.0 - 15.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: SQL, AWS Redshift, PostgreSQL | Experience: 10-15 Years | Location: Bangalore
Skills: SQL, AWS Redshift, PostgreSQL
Posted 2 months ago
5.0 - 7.0 years
2 - 5 Lacs
Pune
Work from Office
Job Title: Data Engineer | Experience: 5-7 Years | Location: Pune
Roles & Responsibilities:
- Create and maintain optimal data pipeline architecture
- Build data pipelines that transform raw, unstructured data into formats that data analysts can use for analysis
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and delivery of data from a wide variety of data sources using SQL and AWS 'Big Data' technologies
- Work with stakeholders, including the Executive, Product, and Program teams, to assist with data-related technical issues and support their data infrastructure needs
- Work with data and analytics experts to strive for greater functionality in our data systems
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using HQL and 'Big Data' technologies (see the Airflow sketch below for the orchestration side)
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it
- Write unit/integration tests, contribute to the engineering wiki, and document work
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Who You Are:
- You're passionate about data and building efficient data pipelines
- You have excellent listening skills and are empathetic to others
- You believe in simple and elegant solutions and give paramount importance to quality
- You have a track record of building fast, reliable, and high-quality data pipelines
- Passionate, with a good understanding of data and a focus on having fun while delivering incredible business results
Must-have skills:
- A Data Engineer with 5+ years of relevant experience who is excited to apply their current skills and to grow their knowledge base
- A degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Experience with big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with data pipeline and workflow management tools
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with object-oriented/object-function scripting languages: Python, Java, Scala, etc.
- Experience with Airflow/Oozie
- Experience in AWS/Spark/Python development
- Experience with Git, JIRA, Jenkins, and shell scripting
- Familiarity with Agile methodology, test-driven development, source control management, and automated testing
- Build processes supporting data transformation, data structures, metadata, dependencies, and workload management
- Experience supporting and working with cross-functional teams in a dynamic environment
Nice-to-have skills:
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with Snowflake
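As a small illustration of the pipeline and workflow-management tooling listed above, here is a minimal Airflow DAG sketch that extracts a day of rows from Postgres and lands them in S3. The connection IDs, table, and bucket are hypothetical placeholders (Airflow 2.4+ API assumed).

```python
# A daily extract-and-land DAG: pull one day of orders from Postgres and
# write them to S3 as CSV, partitioned by execution date.
import csv
import io
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

def extract_orders_to_s3(ds, **_):
    # Hypothetical connection id and table.
    pg = PostgresHook(postgres_conn_id="orders_db")
    rows = pg.get_records(
        "SELECT order_id, user_id, amount FROM orders WHERE order_date = %s", (ds,)
    )
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    # Land the extract in S3 under a date-partitioned key.
    S3Hook(aws_conn_id="aws_default").load_string(
        buf.getvalue(),
        key=f"raw/orders/{ds}/orders.csv",
        bucket_name="example-data-lake",  # hypothetical bucket
        replace=True,
    )

with DAG(
    dag_id="orders_daily_extract",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_orders_to_s3", python_callable=extract_orders_to_s3)
```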
Posted 2 months ago
5.0 - 10.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer | Experience: 5-10 Years | Location: Bangalore
- Data Engineers with PySpark and AWS Glue experience; AWS mandatory, GCP and Azure an add-on
- Proven experience as a Data Engineer or in a similar role in data architecture, database management, and cloud technologies
- Proficiency in programming languages such as Python, Java, or Scala
- Strong experience with data processing frameworks like PySpark, Apache Kafka, or Hadoop
- Hands-on experience with data warehousing solutions such as Redshift, BigQuery, Snowflake, or similar platforms
- Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL, etc.)
- Experience with version control tools like Git
- Familiarity with containerization and orchestration tools like Docker, Kubernetes, and Airflow is a plus
- Strong problem-solving skills, analytical thinking, and attention to detail
- Excellent communication skills and the ability to collaborate with cross-functional teams
Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent
Posted 2 months ago
10.0 - 12.0 years
9 - 13 Lacs
Chennai
Work from Office
Job Title: Data Architect | Experience: 10-12 Years | Location: Chennai
- 10-12 years of experience as a Data Architect
- Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis
- Proficiency in programming languages such as Python, Java, Scala, or Go
- Experience with big data tools like Hadoop and Hive, and data warehouses such as Snowflake, Redshift, Databricks, and Microsoft Fabric
- Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB)
- Should be flexible to work as an individual contributor
Posted 2 months ago
4.0 - 9.0 years
9 - 13 Lacs
Mumbai
Work from Office
We are seeking a skilled Python Developer with expertise in Django, Flask, and API development to join our growing team. The Python Developer will be responsible for designing and implementing backend services, APIs, and integrations that power our core platform. The ideal candidate should have a strong foundation in Python programming, experience with Django and/or Flask frameworks, and a proven track record of delivering robust and scalable solutions.
Responsibilities:
- Design, develop, and maintain backend services and APIs using Python frameworks such as Django and Flask (see the sketch below)
- Collaborate with front-end developers, product managers, and stakeholders to translate business requirements into technical solutions
- Build and integrate RESTful APIs for seamless communication between our applications and external services
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience
- 5+ years of professional experience as a Python Developer, with a focus on backend development
Secondary Skills: Amazon Elastic File System (EFS) Amazon Redshift Amazon S3 Apache Spark Ataccama DQ Analyzer AWS Apache Airflow AWS Athena Azure Data Factory Azure Data Lake Storage Gen2 (ADLS) Azure Databricks Azure Event Hub Azure Stream Analytics Azure Synapse Analytics BigID C++ Cloud Storage Collibra Data Governance (DG) Collibra Data Quality (DQ) Data Lake Storage Data Vault Modeling Databricks DataProc DDI Dimensional Data Modeling EDC AXON Electronic Medical Record (EMR) Extract, Transform & Load (ETL) Financial Services Logical Data Model (FSLDM) Google Cloud Platform (GCP) BigQuery Google Cloud Platform (GCP) Bigtable Google Cloud Platform (GCP) Dataproc HQL IBM InfoSphere Information Analyzer IBM Master Data Management (MDM) Informatica Data Explorer Informatica Data Quality (IDQ) Informatica Intelligent Data Management Cloud (IDMC) Informatica Intelligent MDM SaaS Inmon methodology Java Kimball Methodology Metadata Encoding & Transmission Standards (METS) Metasploit Microsoft Excel Microsoft Power BI NewSQL NoSQL OpenRefine OpenVAS Performance Tuning Python R RDD Optimization SAS SQL Tableau Tenable Nessus TIBCO Clarity
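To make the API-development responsibility concrete, here is a minimal Flask sketch of a REST endpoint pair of the kind described. The routes, fields, and in-memory store are hypothetical stand-ins, not the actual platform's API.

```python
# A tiny REST service sketch: GET one resource, POST a new one. The
# in-memory dict stands in for a real database layer.
from flask import Flask, jsonify, request

app = Flask(__name__)

ORDERS = {1: {"id": 1, "status": "shipped"}}  # hypothetical seed data

@app.get("/api/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="order not found"), 404
    return jsonify(order)

@app.post("/api/orders")
def create_order():
    payload = request.get_json(force=True)
    new_id = max(ORDERS) + 1
    ORDERS[new_id] = {"id": new_id, "status": payload.get("status", "new")}
    return jsonify(ORDERS[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```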
Posted 2 months ago
4.0 - 9.0 years
11 - 16 Lacs
Pune
Work from Office
MS Azure Infra (must); PaaS will be a plus. Ensure solutions meet regulatory standards and manage risk effectively. Hands-on experience using Terraform to design and deploy solutions (at least 5+ years), adhering to best practices to minimize risk and ensure compliance with regulatory requirements.
Primary Skills:
- AWS Infra along with PaaS will be an added advantage
- Certification in Terraform is an added advantage
- Certification in Azure and AWS is an added advantage
- Can handle large audiences to present HLD, LLD, and ERC
- Able to drive solutions/projects independently and lead projects with a focus on risk management and regulatory compliance
Secondary Skills: Amazon Elastic File System (EFS) Amazon Redshift Amazon S3 Apache Spark Ataccama DQ Analyzer AWS Apache Airflow AWS Athena Azure Data Factory Azure Data Lake Storage Gen2 (ADLS) Azure Databricks Azure Event Hub Azure Stream Analytics Azure Synapse Analytics BigID C++ Cloud Storage Collibra Data Governance (DG) Collibra Data Quality (DQ) Data Lake Storage Data Vault Modeling Databricks DataProc DDI Dimensional Data Modeling EDC AXON Electronic Medical Record (EMR) Extract, Transform & Load (ETL) Financial Services Logical Data Model (FSLDM) Google Cloud Platform (GCP) BigQuery Google Cloud Platform (GCP) Bigtable Google Cloud Platform (GCP) Dataproc HQL IBM InfoSphere Information Analyzer IBM Master Data Management (MDM) Informatica Data Explorer Informatica Data Quality (IDQ) Informatica Intelligent Data Management Cloud (IDMC) Informatica Intelligent MDM SaaS Inmon methodology Java Kimball Methodology Metadata Encoding & Transmission Standards (METS) Metasploit Microsoft Excel Microsoft Power BI NewSQL NoSQL OpenRefine OpenVAS Performance Tuning Python R RDD Optimization SAS SQL Tableau Tenable Nessus TIBCO Clarity
Posted 2 months ago
3.0 - 6.0 years
15 - 25 Lacs
Chennai
Work from Office
Data Engineer with Kafka and Informatica PowerExchange
Skills and Qualifications:
- SQL - Mandatory
- Expertise in AWS services (e.g., S3, Glue, Redshift, Lambda) - Mandatory
- Proficiency in Kafka for real-time data streaming - Mandatory (see the consumer sketch below)
- Experience with Informatica PowerExchange CDC for data replication - Mandatory
- Strong programming skills in Python and Java - Mandatory
- Familiarity with orchestration tools like Apache Airflow and AWS Step Functions - Nice to have
- Knowledge of ETL processes and data warehousing
- Understanding of data modeling, data governance, and security best practices
Job Summary: We are seeking a skilled Developer with 3 to 6 years of experience to join our team. The ideal candidate will have expertise as a Data Engineer with Kafka, Informatica, and SQL. This role offers a hybrid work model and operates during the day shift. The candidate will contribute to our projects by leveraging their technical skills to drive innovation and efficiency.
Responsibilities:
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines
- Write clean, maintainable, and efficient code in Python
- Design and optimize SQL queries for data retrieval and manipulation
- Collaborate with cross-functional teams to define, design, and ship new features
- Troubleshoot and resolve issues in development, test, and production environments
- Ensure the performance, quality, and responsiveness of applications
- Conduct code reviews to maintain code quality and share knowledge with team members
- Automate repetitive tasks to improve efficiency and reduce manual effort
- Monitor application performance and implement improvements as needed
- Provide technical guidance and mentorship to junior developers
- Stay updated with the latest industry trends and technologies to ensure our solutions remain cutting-edge
- Contribute to the overall success of the team by meeting project deadlines and delivering high-quality work
Qualifications:
- Strong experience in AWS DevOps, including setting up and managing CI/CD pipelines
- Excellent programming skills in Python, with a focus on writing clean and efficient code
- Proficiency in SQL, with experience in designing and optimizing queries
- Good understanding of cloud computing concepts and services
- Experience working in a hybrid work model, adaptable to both remote and in-office environments
- Strong problem-solving skills and the ability to troubleshoot complex issues
- A team player with excellent communication and collaboration skills
- Proactive attitude, willing to take initiative in improving processes and systems
- Detail-oriented and committed to delivering high-quality work
- Experience with version control systems like Git
- Able to work independently and manage time effectively
- A passion for learning and staying updated with new technologies and best practices
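For the real-time streaming requirement, a minimal Python Kafka consumer sketch (using the kafka-python package) might look like this. The topic, brokers, and CDC event shape are hypothetical assumptions, loosely modeled on Debezium-style change events rather than actual Informatica PowerExchange output.

```python
# A CDC-topic consumer sketch: read JSON change events and apply basic
# validation before a downstream load step.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.cdc",                            # hypothetical CDC topic
    bootstrap_servers=["localhost:9092"],    # hypothetical brokers
    group_id="orders-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    change = message.value
    # Skip malformed change events instead of failing the whole consumer.
    if "op" not in change or "after" not in change:
        continue
    if change["op"] in ("c", "u"):  # create/update events
        row = change["after"]
        # Stand-in for a real load step (e.g., upsert into Redshift).
        print(f"upsert order {row.get('order_id')}")
```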
Posted 2 months ago
3.0 - 6.0 years
15 - 25 Lacs
Chennai
Work from Office
Data Engineer
Skills and Qualifications:
- SQL - Mandatory
- Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda) - Mandatory
- Experience working with DBT - Nice to have
- Proficiency in PySpark or Python for big data processing - Mandatory
- Experience with orchestration tools like Apache Airflow and AWS CodePipeline - Mandatory
Job Summary: We are seeking a skilled Developer with 3 to 6 years of experience to join our team. The ideal candidate will have expertise in AWS DevOps, Python, and SQL. This role involves working in a hybrid model with day shifts and no travel requirements. The candidate will contribute to the company's purpose by developing and maintaining high-quality software solutions.
Responsibilities:
- Develop and maintain software applications using AWS DevOps, Python, and SQL
- Collaborate with cross-functional teams to design and implement new features
- Ensure the scalability and reliability of applications through effective coding practices
- Monitor and optimize application performance to meet user needs
- Provide technical support and troubleshooting for software issues
- Implement security best practices to protect data and applications
- Participate in code reviews to maintain code quality and consistency
- Create and maintain documentation for software applications and processes
- Stay updated with the latest industry trends and technologies to enhance skills
- Work in a hybrid model, balancing remote and in-office work as needed
- Communicate effectively with team members and stakeholders to ensure project success
- Contribute to the continuous improvement of development processes and methodologies
- Ensure timely delivery of projects while maintaining high-quality standards
Qualifications:
- Strong understanding of AWS DevOps, including experience with deployment and management of applications on AWS
- Proficiency in Python programming, with the ability to write clean and efficient code
- Experience with SQL for database management and querying
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Adaptable to a hybrid work model and able to manage time effectively
Posted 2 months ago
8.0 - 13.0 years
5 - 12 Lacs
Mysuru, Pune
Hybrid
Role & responsibilities:
- 4+ years of experience as a Data Engineer or in a similar role
- 3+ years of experience building data solutions at scale using one of the enterprise data platforms: Palantir Foundry, Snowflake, Cloudera/Hive, Amazon Redshift
- 3+ years of experience with SQL and NoSQL databases (Snowflake or Hive)
- 3+ years of hands-on experience with programming using Python, Spark, or C#
- Experience with DevOps principles and CI/CD
- Strong understanding of ETL principles and data integration patterns
- Experience with Agile and iterative development processes is a plus
- Experience with cloud services such as AWS, Azure, etc., and other big data tools like Spark, Kafka, etc., is a plus (not mandatory)
- Knowledge of TypeScript and full-stack development experience is a plus (not mandatory)
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs. Key Responsibilities: Design cloud-native solutions using AWS, Azure, or GCP Lead cloud migration and transformation projects Define cloud governance, cost control, and security strategies Collaborate with DevOps and engineering teams for implementation Required Skills & Qualifications: Deep expertise in cloud architecture and multi-cloud environments Experience with containers, serverless, and microservices Proficiency in Terraform, CloudFormation, or equivalent Bonus: Cloud certification (AWS/Azure/GCP Architect) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
Posted 2 months ago
1.0 - 4.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Job Area: Miscellaneous Group > Data Analyst
Qualcomm Overview: Qualcomm is a company of inventors that unlocked 5G, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform 5G's potential into world-changing technologies and products. This is the Invention Age, and this is where you come in.
General Summary
About the Team: Qualcomm's People Analytics team plays a crucial role in transforming data into strategic workforce insights that drive HR and business decisions. As part of this lean but high-impact team, you will have the opportunity to analyze workforce trends, ensure data accuracy, and collaborate with key stakeholders to enhance our data ecosystem. This role is ideal for a generalist who thrives in a fast-paced, evolving environment: someone who can independently conduct data analyses, communicate insights effectively, and work cross-functionally to enhance our People Analytics infrastructure.
Why Join Us?
- End-to-End Impact: Work on the full analytics cycle, from data extraction to insight generation, driving meaningful HR and business decisions.
- Collaboration at Scale: Partner with HR leaders, IT, and other analysts to ensure seamless data integration and analytics excellence.
- Data-Driven Culture: Be a key player in refining our data lake, ensuring data integrity, and influencing data governance efforts.
- Professional Growth: Gain exposure to multiple areas of people analytics, including analytics, storytelling, and stakeholder engagement.
Key Responsibilities
People Analytics & Insights:
- Analyze HR and workforce data to identify trends, generate insights, and provide recommendations to business and HR leaders.
- Develop thoughtful insights to support ongoing HR and business decision-making.
- Present findings in a clear and compelling way to stakeholders at various levels, including senior leadership.
Data Quality & Governance:
- Ensure accuracy, consistency, and completeness of data when pulling from the data lake and other sources.
- Identify and troubleshoot data inconsistencies, collaborating with IT and other teams to resolve issues.
- Document and maintain data definitions, sources, and reporting standards to drive consistency across analytics initiatives.
Collaboration & Stakeholder Management:
- Work closely with other analysts on the team to align methodologies, share best practices, and enhance analytical capabilities.
- Act as a bridge between People Analytics, HR, and IT teams to define and communicate data requirements.
- Partner with IT and data engineering teams to improve data infrastructure and expand available datasets.
Qualifications
Required: 4-7 years of experience in a People Analytics-focused role
Analytical & Technical Skills:
- Strong ability to analyze, interpret, and visualize HR and workforce data to drive insights.
- Experience working with large datasets and ensuring data integrity.
- Proficiency in Excel and at least one data visualization tool (e.g., Tableau, Power BI).
Communication & Stakeholder Management:
- Ability to communicate data insights effectively to both technical and non-technical audiences.
- Strong documentation skills to define and communicate data requirements clearly.
- Experience collaborating with cross-functional teams, including HR, IT, and business stakeholders.
Preferred:
Technical Proficiency:
- Experience with SQL, Python, or R for data manipulation and analysis.
- Familiarity with HR systems (e.g., Workday) and cloud-based data platforms.
People Analytics Expertise:
- Prior experience in HR analytics, workforce planning, or related fields.
- Understanding of key HR metrics and workforce trends (e.g., turnover, engagement, diversity analytics).
Additional Information: This is an office-based position (4 days a week onsite) with possible locations that may include India and Mexico.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail myhr.support@qualcomm.com or call Qualcomm's toll-free number. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications, or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
Posted 2 months ago
4.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Minimum Qualifications:
- BA/BSc/B.E./B.Tech degree from a Tier I or II college in Computer Science, Statistics, Mathematics, Economics, or related fields
- 1 to 4 years of experience working with data and conducting statistical and/or numerical analysis
- Strong understanding of how data can be stored and accessed in different structures
- Experience with writing computer programs to solve problems
- Strong understanding of data operations such as sub-setting, sorting, merging, aggregating, and CRUD operations
- Ability to write SQL code and familiarity with R/Python and Linux shell commands
- Willing and able to quickly learn about new businesses, database technologies, and analysis techniques
- Ability to tell a good story and support it with numbers and visuals
- Strong oral and written communication
Preferred Qualifications:
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
- Experience building analytics applications leveraging R, Python, Tableau, Looker, or others
- Experience in geo-spatial analysis with PostGIS and QGIS
Posted 2 months ago
5.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced SQL Developer with expertise in SQL Server Analysis Services (SSAS) and AWS to join our growing team. The successful candidate will be responsible for designing, developing, and maintaining SQL Server-based OLAP cubes and SSAS models for business intelligence purposes. You will work with multiple data sources, ensuring data integration, optimization, and performance of the reporting models. This role offers an exciting opportunity to work in a hybrid work environment, collaborate with cross-functional teams, and apply your skills in SSAS, SQL, and AWS to design scalable and high-performance data solutions.
Key Responsibilities:
- Design, develop, and maintain SQL Server Analysis Services (SSAS) models, including both Multidimensional and Tabular models, to support business intelligence and reporting solutions.
- Create and manage OLAP cubes that are optimized for fast query performance and used for analytical reporting and decision-making.
- Develop and implement multidimensional and tabular data models for various business needs, ensuring the models are flexible, scalable, and optimized for reporting.
- Work on performance tuning and optimization of SSAS solutions, ensuring efficient query processing and high performance even with large data sets.
- Integrate data from various sources, including SQL Server databases, flat files, and cloud-based storage, into SSAS models for seamless and accurate reporting.
- Integrate and manage data from AWS services (e.g., S3, Redshift, etc.) into the SQL Server database and SSAS models for hybrid cloud and on-premise data solutions.
- Leverage SQL Server PolyBase to access and integrate data from external data sources like AWS S3, Azure Blob Storage, or other systems for data processing.
- Ensure data integrity, consistency, and accuracy within the data models and reporting systems. Work closely with data governance teams to maintain high-quality data standards.
- Work in an agile team environment with BI developers, data engineers, and business analysts to align data models and solutions with business requirements.
- Provide support for production systems, troubleshoot issues with SSAS models, queries, and reporting solutions, and implement fixes when necessary.
- Maintain clear and detailed technical documentation for SSAS model designs, ETL processes, and best practices for data integration.
Required Skills & Experience:
- 5+ years of experience as a SQL Developer with strong hands-on expertise in SSAS.
- In-depth experience in creating and managing SSAS models, including multidimensional (OLAP) and tabular models.
- Proficiency in SQL Server (T-SQL, SSIS, SSRS) for data integration, data transformation, and reporting.
- Strong understanding of SSAS performance tuning, query optimization, and processing.
- Experience with AWS services, particularly AWS S3 and AWS Redshift, and their integration with SQL Server-based solutions.
- Knowledge of SQL Server PolyBase for data integration and access from external data sources.
- Experience in business intelligence solutions and creating reports using tools like Power BI or SSRS.
- Familiarity with cloud data integration, ensuring seamless integration between on-premise SQL Server databases and cloud-based storage (AWS).
- Strong problem-solving skills and the ability to troubleshoot and resolve issues in data models and data warehouses.
- Excellent communication skills, both verbal and written, with the ability to effectively collaborate with cross-functional teams.
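As a small companion to the SSAS/SQL Server work above, here is a minimal Python (pyodbc) sketch that runs the kind of aggregate query a cube measure would answer, for example to sanity-check model numbers against the relational source. The server, database, and star-schema table names are hypothetical.

```python
# Query a hypothetical star schema the way a cube measure would aggregate
# it, as a cross-check against SSAS results.
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-sql-host;DATABASE=SalesDW;"   # hypothetical server/db
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT d.CalendarYear, SUM(f.SalesAmount) AS TotalSales
    FROM dbo.FactSales AS f
    JOIN dbo.DimDate AS d ON d.DateKey = f.OrderDateKey
    GROUP BY d.CalendarYear
    ORDER BY d.CalendarYear
    """
)
for year, total in cursor.fetchall():
    print(year, total)
conn.close()
```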
Posted 2 months ago
7.0 - 12.0 years
20 - 35 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Principal Consultant - AWS Developer! We are looking for candidates who have a passion for cloud with knowledge of different cloud environments. Ideal candidates should have technical experience in AWS platform services: IAM roles and policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science/Software Engineering. You are meticulous, thorough, and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world.
Responsibilities:
- Data Storage & Lake Management: Expertise in S3 (data lake design, partitioning, optimization), Glue Catalog (schema/version management), and Lake Formation (access control).
- Data Processing: Hands-on with AWS Glue (ETL with PySpark/Scala), EMR (Spark/Hadoop), Lambda (event-driven ETL; see the sketch after this posting), and Athena (S3 querying and optimization).
- Data Ingestion: Experience with Kinesis (real-time streaming), DMS (database migration), and Amazon MSK (Kafka-based ingestion).
- Databases & Warehousing: Proficient in Redshift (data warehousing), DynamoDB (NoSQL), and RDS (PostgreSQL/MySQL).
- Overall Data Engineering Concepts: Strong grasp of data modeling (star/snowflake, data vault), file formats (Parquet, Avro, etc.), S3 partitioning/bucketing, and ETL/ELT best practices.
- Hands-on experience and good skills with AWS platform services: IAM roles and policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc.
- Must have good working knowledge of Kubernetes and Docker.
- Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness.
- Design, develop, and maintain AWS-based applications, ensuring high performance, scalability, and security.
- Integrate AWS services into application architecture, leveraging tools such as Lambda, API Gateway, S3, DynamoDB, and RDS.
- Collaborate with DevOps teams to automate deployment pipelines and optimize CI/CD practices.
- Develop scripts and automation tools to manage cloud environments efficiently.
- Monitor, troubleshoot, and resolve application performance issues.
- Implement best practices for cloud security, data management, and cost optimization.
- Participate in code reviews and provide technical guidance to junior developers.
Qualifications we seek in you!
Minimum Qualifications / Skills:
- Experience in software development with a focus on AWS technologies.
- Proficiency in AWS services such as EC2, Lambda, S3, RDS, and DynamoDB.
- Strong programming skills in Python, Node.js, or Java.
- Experience with RESTful APIs and microservices architecture.
- Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline.
- Knowledge of infrastructure as code using CloudFormation or Terraform.
- Problem-solving skills and the ability to troubleshoot application issues in a cloud environment.
- Excellent teamwork and communication skills.
Preferred Qualifications / Skills:
- AWS Certified Developer – Associate or AWS Certified Solutions Architect – Associate.
- Experience with serverless architectures and API development.
- Familiarity with Agile development practices.
- Knowledge of monitoring and logging solutions like CloudWatch and the ELK Stack.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
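As referenced in the Data Processing bullet above, a minimal event-driven Lambda sketch for S3-triggered processing might look like the following. The processing step (line counting) is a hypothetical stand-in; the event shape follows the standard S3 notification format.

```python
# An S3-triggered Lambda sketch: on each object-created event, read the
# new object and log basic metadata about it.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 keys arrive URL-encoded in event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Stand-in for real processing: count lines of the landed file.
        print(json.dumps({"bucket": bucket, "key": key, "lines": body.count(b"\n")}))
    return {"statusCode": 200}
```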
Posted 2 months ago
3.0 - 5.0 years
3 - 7 Lacs
Mumbai
Work from Office
Job Summary: We are seeking a highly analytical and detail-oriented Data Specialist with deep expertise in SQL, Python, statistics, and automation. The ideal candidate will be responsible for designing robust data pipelines, analyzing large datasets, driving insights through statistical methods, and automating workflows to enhance data accessibility and business decision-making.
Key Responsibilities:
- Write and optimize complex SQL queries for data extraction, transformation, and reporting.
- Develop and maintain Python scripts for data analysis, ETL processes, and automation tasks.
- Conduct statistical analysis to identify trends, anomalies, and actionable insights.
- Build and manage automated dashboards and data pipelines using tools such as Airflow, Pandas, or Apache Spark.
- Collaborate with cross-functional teams (product, engineering, business) to understand data needs and deliver scalable solutions.
- Implement data quality checks and validation procedures to ensure accuracy and consistency (see the sketch below).
- Support machine learning model deployment and performance tracking (if applicable).
- Document data flows, models, and processes for internal knowledge sharing.
Key Requirements:
- Strong proficiency in SQL (joins, CTEs, window functions, performance tuning).
- Solid experience with Python (data manipulation using Pandas, NumPy, scripting, and automation).
- Applied knowledge of statistics (hypothesis testing, regression, probability, distributions).
- Experience with data automation tools (Airflow, dbt, or equivalent).
- Familiarity with data visualization tools (Tableau, Power BI, or Plotly) is a plus.
- Understanding of data warehousing concepts (e.g., Snowflake, BigQuery, Redshift).
- Strong problem-solving skills and the ability to work independently.
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- Exposure to cloud platforms like AWS, GCP, or Azure.
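The data-quality responsibility flagged above could be sketched minimally in pandas as follows; the input file, column names, and rules are hypothetical examples of completeness, uniqueness, and range checks.

```python
# A small data-quality gate: load a CSV and report rule violations.
import pandas as pd

def run_quality_checks(path: str) -> list[str]:
    df = pd.read_csv(path)
    failures = []
    # Completeness: key columns must not contain nulls.
    for col in ("order_id", "amount"):
        if df[col].isna().any():
            failures.append(f"nulls found in {col}")
    # Uniqueness: order_id should behave like a primary key.
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    # Range: amounts should be strictly positive.
    if (df["amount"] <= 0).any():
        failures.append("non-positive amounts")
    return failures

if __name__ == "__main__":
    issues = run_quality_checks("orders.csv")  # hypothetical input file
    print("OK" if not issues else "; ".join(issues))
```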
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Category: Technology
Shuru is a technology-consulting company that embeds senior product and engineering teams into fast-growing companies worldwide to accelerate growth and de-risk strategy. Our work is global, high-stakes, and unapologetically business-first.
Role Overview: You'll join a lean, senior-only business intelligence team as a Senior Data Analyst who will sit shoulder-to-shoulder with our clients, operating as their in-house analytics brain-trust. Your mandate: design the data questions worth asking, own the pipelines that answer them, and convert findings into clear, bottom-line actions. If you need daily direction, this isn't for you. If you see a vague brief as oxygen, read on.
Key Responsibilities:
- Frame the right questions: Translate ambiguous product or commercial goals into testable hypotheses, selecting the metrics that truly explain user behaviour and unit economics.
- Own data end-to-end: Model, query, and transform data in SQL and dbt, pushing to cloud warehouses such as Snowflake/BigQuery, with zero babysitting.
- Build self-service BI: Deliver dashboards in Metabase/Looker that non-technical stakeholders can tweak without coming back to you every week.
- Tell unforgettable stories: Turn complex analyses into visuals and narratives that drive decisions in the C-suite and on the sprint board.
- Guard the data moat: Champion data governance, privacy, and quality controls that scale across multiple client engagements.
- Mentor & multiply: Level up engineers and product managers on analytical thinking, setting coding and insight standards for future analysts.
Requirements (Must-Have Skills & Experience):
- Minimum experience of 3 years
- Core Analytics: Expert SQL; comfort with Python or R for advanced analysis; solid grasp of statistical inference and experimentation.
- Modern Data Stack: Hands-on with dbt, Snowflake/BigQuery/Redshift, and at least one orchestration tool (Airflow, Dagster, or similar).
- BI & Visualisation: Proven delivery in Metabase, Looker, or Tableau (including performance tuning for big data models).
- Product & Growth Metrics: Demonstrated ability to define retention, activation, and LTV/payback KPIs for SaaS or consumer-tech products.
- Communication: Relentless clarity; you can defend an insight to both engineers and the CFO, and change course when the data disproves you.
- Independence: History of thriving with "figure it out" briefs and distributed teams across time zones.
Bonus Points:
- Feature-flag experimentation at scale (e.g., Optimizely, LaunchDarkly).
- Familiarity with privacy-enhancing tech (differential privacy, data clean rooms).
Benefits:
- Work on international projects: Execute with founders and execs from around the globe, stacking your playbook fast.
- Regular team outings: We fund quarterly off-sites and virtual socials to keep the remote vibe human.
- Collaborative & growth-oriented: Learn directly from CXOs, leads, and seasoned PMs; no silos, no artificial ceilings.
- Competitive salary & benefits: Benchmarked around the 90th percentile for similar-stage firms, plus performance upside.
Posted 2 months ago
8.0 - 13.0 years
25 - 30 Lacs
Chennai
Work from Office
Job Title: Data Engineer | Experience: 6-7 Years | Location: Chennai (Hybrid)
Key Skills: Python, PySpark, AWS (S3, Lambda, Glue, EMR, Redshift), SQL, Snowflake, DBT, MongoDB, Kafka, Airflow
Job Description: Virtusa is hiring a Senior Data Engineer with expertise in building scalable data pipelines using Python, PySpark, and AWS services. The role involves data modeling in Snowflake, ETL development with DBT, and orchestration via Airflow. Experience with MongoDB, Kafka, and data streaming is essential (see the streaming sketch below).
Responsibilities:
- Develop and optimize data pipelines using PySpark and Python
- Leverage AWS for data ingestion and processing
- Manage Snowflake data models and transformations via DBT
- Work with SQL across multiple databases
- Integrate streaming and NoSQL sources (Kafka, MongoDB)
- Support analytics and ML workflows
- Maintain data quality, lineage, and governance
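For the streaming side referenced above, here is a minimal PySpark Structured Streaming sketch reading Kafka and writing Parquet to S3 (it assumes the spark-sql-kafka connector package is on the classpath). Brokers, topic, schema, and paths are hypothetical.

```python
# Kafka-to-S3 streaming sketch: parse JSON order events from a Kafka topic
# and append them to a Parquet dataset with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

# Hypothetical event schema for the incoming JSON messages.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical brokers
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-curated/orders_stream/")       # hypothetical
    .option("checkpointLocation", "s3://example-curated/_chk/")  # hypothetical
    .outputMode("append")
    .start()
)
query.awaitTermination()
```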
Posted 2 months ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (3-5 Years Experience)
Location: Gurgaon, Pune, Bangalore, Hyderabad
Job Summary: We are seeking a skilled and motivated Data Engineer with 3 to 5 years of experience to join our growing team. The ideal candidate will have hands-on expertise in building robust, scalable data pipelines, working with modern data platforms, and enabling data-driven decision-making across the organization. You'll work closely with data scientists, analysts, and engineering teams to build and maintain efficient data infrastructure and tooling.
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support analytics and product use cases
- Collaborate with data analysts, scientists, and business stakeholders to gather requirements and translate them into data solutions
- Manage data integrations from various internal and external data sources
- Optimize data workflows for performance, cost-efficiency, and reliability
- Build and maintain data models and data warehouses using industry best practices
- Monitor, troubleshoot, and improve existing data pipelines
- Implement data quality frameworks and ensure data governance standards are followed
- Contribute to documentation, code reviews, and knowledge sharing within the team
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field
- 3-5 years of experience as a Data Engineer or in a similar data-focused role
- Strong command of SQL and proficiency in Python
- Good engineering practices
- Experience with data pipeline orchestration tools such as Apache Airflow or equivalent
- Hands-on experience with cloud data platforms (AWS/GCP/Azure) and services such as S3, Redshift, BigQuery, or Azure Data Lake
- Experience with data warehousing concepts and tools like Snowflake, Redshift, or Databricks
- Familiarity with version control tools such as Git
- Strong analytical and communication skills
Preferred Qualifications:
- Exposure to big data tools and frameworks such as Spark, Hadoop, or Kafka
- Experience with containerization (Docker/Kubernetes)
- Familiarity with CI/CD pipelines and automation in data engineering
- Awareness of data security, privacy, and compliance principles
What We Offer:
- A collaborative and inclusive work environment
- Opportunities for continuous learning and career growth
- Competitive compensation and benefits
- Flexibility to work from any of our offices in Gurgaon, Pune, Bangalore, or Hyderabad
Posted 2 months ago
1.0 - 5.0 years
3 - 7 Lacs
Coimbatore
Work from Office
About Responsive: Responsive, formerly RFPIO, is the market leader in an emerging new category of SaaS solutions called Strategic Response Management. Responsive customers, including Google, Microsoft, BlackRock, T. Rowe Price, Adobe, Amazon, Visa, and Zoom, are using Responsive to manage business-critical responses to RFPs, RFIs, RFQs, security questionnaires, due diligence questionnaires, and other requests for information. Responsive has nearly 2,000 customers of all sizes and has been voted "best in class" by G2 for 13 straight quarters. It also has more than 35% of the cloud SaaS leaders as customers, as well as more than 15 of the Fortune 100. Customers have used Responsive to close more than $300B in transactions to date.
About The Role: We are seeking a Product Data Visualization Analyst with 2+ years of experience in data visualization and dashboard development. The ideal candidate should be proficient in designing, developing, and optimizing Tableau dashboards to transform complex data into meaningful insights. This role requires a deep understanding of data storytelling, interactive visualizations, and dashboard performance optimization to support business decision-making.
Essential Functions:
- Dashboard Development: Design and develop interactive, visually compelling, and user-friendly dashboards in Tableau to support business insights.
- Data Visualization Best Practices: Apply best practices in data visualization, color theory, and UI/UX design to enhance readability and user experience.
- Performance Optimization: Optimize Tableau dashboards by improving load times, managing extracts, and refining calculations for efficiency.
- Data Blending & Relationships: Work with multiple data sources (SQL databases, spreadsheets, cloud platforms, APIs) to create blended datasets and establish relationships.
- Advanced Calculations & Analytics: Use Tableau calculations (LOD, table calculations, parameters, sets, and groups) to build advanced analytical insights.
- Collaboration with Teams: Work closely with data analysts, business teams, and stakeholders to understand requirements and translate them into actionable dashboards.
- Data Quality & Governance: Ensure data accuracy, integrity, and governance while working with different data sources.
- Automation & Efficiency: Create automated reports and alerts using Tableau Server or Tableau Prep to streamline business processes.
- Training & Documentation: Document dashboard usage and provide training to end users on self-service analytics in Tableau.
Education: Bachelor's degree in Computer Science, Data Analytics, Statistics, Business Intelligence, or a related field.
Experience:
- 2+ years of hands-on experience in Tableau development and data visualization.
- Experience with Tableau Prep for data cleaning and transformation.
- Familiarity with cloud-based data sources (Snowflake, Google BigQuery, AWS Redshift, etc.).
- Experience with Python or R for additional data analysis.
- Proficiency in Tableau: strong experience in building, optimizing, and maintaining dashboards using Tableau Desktop and Tableau Server.
- SQL knowledge: ability to write queries, joins, and aggregations for data extraction from relational databases.
- Data storytelling: ability to present complex data in a clear and engaging way for different stakeholders.
- Dashboard performance tuning: experience in optimizing extracts, calculations, and filters for smooth dashboard performance.
- Data blending & relationships: experience integrating multiple datasets from different sources to create meaningful insights.
- Attention to detail: strong focus on data accuracy, visualization consistency, and UI design principles.
Knowledge, Skills & Ability:
- Knowledge of Tableau Server administration and dashboard security settings.
- Strong problem-solving skills and the ability to translate business requirements into impactful dashboards.
- Excellent communication skills for interacting with business stakeholders.
Posted 2 months ago