
745 Amazon Redshift Jobs - Page 7

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

2 - 5 Lacs

Mumbai, Chennai

Hybrid

Required Skills:
- Working knowledge of big data / cloud-based / columnar databases (such as Snowflake, AWS Redshift, BigQuery)
- Experience with scripting languages such as Python, R, Spark, or Perl
- Very good knowledge of ETL concepts; experience with ETL tools such as SSIS, Talend, or Pentaho is a plus
- Very good knowledge of SQL querying
- Rich experience in relational data modelling, including developing logical, physical, and conceptual data models
- Experience with AWS and Google Cloud services is a plus
- Experience with BI dashboards is a plus
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools such as Git
- Strong analytical skills and problem-solving aptitude; ability to learn new technologies quickly
- Works collaboratively with other team members; passionate about learning and working on versatile technologies

Notice Period: Immediate to 15 days
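For illustration, a minimal sketch of the extract-transform-load pattern this skills list describes. The file name, column names, and the local SQLite target are assumptions; a real pipeline would load into Snowflake, Redshift, or BigQuery instead.

```python
# Minimal ETL sketch; all file/table/column names are hypothetical.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read the raw source data.
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: basic cleansing and derivation.
    df = df.dropna(subset=["order_id"])
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    # Load: append to the target table (SQLite stands in for a warehouse).
    df.to_sql("fact_orders", conn, if_exists="append", index=False)

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(transform(extract("orders.csv")), conn)
```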

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Mumbai

Work from Office

Role Purpose: Design, test, and maintain software programs for operating systems or applications that are deployed at the client end, and ensure they meet 100% quality-assurance parameters.

Responsibilities:
- Design and implement data modeling, data ingestion, and data processing for various datasets
- Design, develop, and maintain the ETL framework for new data sources
- Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python, and Databricks
- Build orchestration workflows using Airflow and Databricks Job workflows
- Develop and execute ad hoc data ingestion to support business analytics
- Proactively interact with vendors on questions and report status accordingly
- Explore and evaluate tools and services to support business requirements
- Help create a data-driven culture and impactful data strategies; show aptitude for learning new technologies and solving complex problems

Qualifications:
- Bachelor's degree, preferably in Computer Science, Information Systems, or Information Technology
- Minimum 5 years of experience on cloud platforms such as AWS, Azure, or GCP
- Minimum 5 years of experience with Amazon Web Services such as VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc.
- Minimum 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark, and Airflow for orchestration
- Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering, Job workflow orchestration, and dashboard generation based on business requirements
- Minimum 5 years of experience in SQL, Python, and source control such as Bitbucket, with CI/CD for code deployment
- Experience with PostgreSQL, SQL Server, MySQL, and Oracle databases
- Experience with MPP systems such as AWS Redshift, AWS EMR, and Databricks SQL warehouses and compute clusters
- Experience in distributed programming with Python, Unix scripting, MPP, and RDBMS databases for data integration
- Experience building distributed high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouses and Redshift
- Experience with Agile methodology; proven ability to write technical specifications for data extraction and good-quality code
- Experience with big data processing techniques using Sqoop, Spark, and Hive is an additional plus
- Experience with data visualization tools including Power BI and Tableau
- Nice to have: UI experience with the Python Flask framework and Angular

Mandatory Skills: Python for Insights. Experience: 5-8 years.
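As a hedged illustration of the PySpark ingestion work this posting lists, here is a minimal batch job sketch. The bucket paths, column names, and partitioning scheme are hypothetical.

```python
# Illustrative PySpark ingestion job; all S3 paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_ingest").getOrCreate()

# Read a raw daily extract from S3.
raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

# Cleanse and standardize before loading downstream.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Write partitioned Parquet for the curated layer.
(clean.write.mode("overwrite")
      .partitionBy("event_type")
      .parquet("s3://example-curated-bucket/events/"))
```

The same script runs largely unchanged as an AWS Glue or EMR step, which is why roles like this treat PySpark as the common denominator across those services.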

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Pune

Work from Office

Sr. Data Engineer 2

We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology- and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations
- Take the lead in analyzing, designing, and implementing data solutions, including constructing and designing data models and ETL processes
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups
- Lead and mentor engineering discussions, advocating for best practices
- Actively participate in design and code reviews
- Access and explore third-party data APIs to determine the data required to meet business needs
- Ensure data quality and integrity across different sources and systems
- Manage data pipelines for both analytics and operational purposes
- Continuously enhance processes and policies to improve SLA and SOX compliance

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field
- Possess over 5 years of experience in data engineering, focusing on building and maintaining data environments
- Demonstrate at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment
- Exhibit a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB)
- Are proficient in programming with Python or other scripting languages
- Have familiarity with columnar OLAP databases and data modeling
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau
- Possess excellent communication and interpersonal skills to collaborate with business stakeholders and translate requirements

Added bonus if you also have:
- A good understanding of Salesforce and NetSuite systems
- Experience in SaaS environments
- Designed and deployed ML models
- Experience with events and streaming data
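To make the dbt-plus-Airflow ELT pattern named above concrete, here is a minimal Airflow DAG sketch that runs dbt transformations and tests. The DAG id, schedule, and project path are assumptions, not details from the posting.

```python
# Minimal Airflow DAG sketch for a dbt-based ELT flow.
# dag_id, schedule, and the dbt project path are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run dbt transformations after upstream loads complete.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run",
    )
    # Gate downstream consumers on dbt's data tests.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/project && dbt test",
    )
    dbt_run >> dbt_test
```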

Posted 2 weeks ago

Apply

8.0 - 12.0 years

15 - 30 Lacs

Gurugram

Work from Office

Role Description:
- Lead and mentor a team of data engineers to design, develop, and maintain high-performance data pipelines and platforms
- Architect scalable ETL/ELT processes, streaming pipelines, and data lake/warehouse solutions (e.g., Redshift, Snowflake, BigQuery)
- Own the roadmap and technical vision for the data engineering function, ensuring best practices in data modeling, governance, quality, and security
- Drive adoption of modern data stack tools (e.g., Airflow, Kafka, Spark) and foster a culture of continuous improvement
- Ensure the platform is reliable, scalable, and cost-effective across batch and real-time use cases
- Champion data observability, lineage, and privacy initiatives to ensure trust in data across the organization

Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- 8+ years of hands-on experience in data engineering, with at least 2 years in a leadership or managerial role
- Proven experience with distributed data processing frameworks such as Apache Spark, Flink, or Kafka
- Strong SQL skills and experience in data modeling, data warehousing, and schema design
- Proficiency with cloud platforms (AWS/GCP/Azure) and their native data services (e.g., AWS Glue, Redshift, EMR, BigQuery)
- Solid grasp of data architecture, system design, and performance optimization at scale
- Experience working in an agile development environment and managing sprint-based delivery cycles
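As a sketch of the streaming-pipeline work this role architects, here is a minimal Spark Structured Streaming job reading from Kafka. Broker address, topic, and paths are hypothetical, and the spark-sql-kafka connector package must be on the classpath when the job is submitted.

```python
# Structured Streaming sketch: Kafka -> Parquet. All endpoints are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers bytes; cast the payload before parsing what you need.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
          .option("path", "s3://example-lake/events/")
          .option("checkpointLocation", "s3://example-lake/checkpoints/events/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```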

Posted 2 weeks ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Mumbai

Work from Office

Role Purpose: Support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialist team.

Responsibilities:
- Oversee and support the process by reviewing daily transactions against performance parameters
- Review the performance dashboard and team scores; support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, the problem-solving steps taken, and the total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries, and resolve them within the SLAs defined in the contract
- Develop understanding of the process and product among team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot recurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after call/email requests; avoid legal challenges by monitoring compliance with the service agreement
- Handle technical escalations through effective diagnosis and troubleshooting of client queries; manage and resolve technical roadblocks and escalations per SLA and quality requirements, escalating to TA & SES in a timely manner if unable to resolve
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions; troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining the customer's and client's business
- Organize ideas and communicate oral messages appropriate to listeners and situations; follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge; collate and conduct trainings (triages) to bridge skill gaps identified through interviews with Production Specialists, informing the client of the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and other trainings per client requirements and recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team; update job knowledge through self-learning opportunities and personal networks

Mandatory Skills: Informatica MDM. Experience: 5-8 years.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru, Karnataka

Work from Office

Data Governance

This team is the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you.

Your day-to-day role will include:
- Driving business decisions with technical input and leading the team
- Designing, implementing, and supporting a data infrastructure from scratch
- Managing AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA
- Extracting, transforming, and loading data from various sources using SQL and AWS big data technologies
- Exploring and learning the latest AWS technologies to enhance capabilities and efficiency
- Collaborating with data scientists and BI engineers to adopt best practices in reporting and analysis
- Improving ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Building data platforms, data pipelines, and data management and governance tools

Basic qualifications for Data Engineer:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills; strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

Preferred qualifications:
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering best practices across the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling; ensuring the accuracy and availability of data to customers
- Proficiency in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills

For managers: customer centricity and obsession for the customer; ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working; ability to structure and organize teams and streamline communication; prior experience executing large-scale data engineering projects.
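For flavor, a small boto3 sketch of the "manage AWS resources" duty above: listing landed objects and kicking off a Glue crawler. The bucket, prefix, and crawler name are placeholders invented for this example.

```python
# Illustrative AWS resource management with boto3; identifiers are hypothetical.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# List raw objects landed for today's partition.
resp = s3.list_objects_v2(Bucket="example-raw-bucket", Prefix="landing/2024-01-01/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Trigger a Glue crawler so new partitions appear in the Data Catalog.
glue.start_crawler(Name="example-landing-crawler")
```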

Posted 2 weeks ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Experience: Minimum of 10+ years in database development and management roles.
- SQL Mastery: Advanced expertise in crafting and optimizing complex SQL queries and scripts
- AWS Redshift: Proven experience managing, tuning, and optimizing large-scale Redshift clusters
- PostgreSQL: Deep understanding of PostgreSQL, including query planning, indexing strategies, and advanced tuning techniques
- Data Pipelines: Extensive experience in ETL development and integrating data from multiple sources into cloud environments
- Cloud Proficiency: Strong experience with AWS services such as ECS, S3, KMS, Lambda, Glue, and IAM
- Data Modeling: Comprehensive knowledge of data modeling techniques for both OLAP and OLTP systems
- Scripting: Proficiency in Python, C#, or other scripting languages for automation and data manipulation

Preferred Qualifications:
- Leadership: Prior experience leading database or data engineering teams
- Data Visualization: Familiarity with reporting and visualization tools like Tableau, Power BI, or Looker
- DevOps: Knowledge of CI/CD pipelines, infrastructure as code (e.g., Terraform), and version control (Git)
- Certifications: Relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Database - Specialty, PostgreSQL Certified Professional) are a plus
- Azure Databricks: Familiarity with Azure Databricks for data engineering and analytics workflows is a significant advantage

Soft Skills:
- Strong problem-solving and analytical capabilities
- Exceptional communication skills for collaboration with technical and non-technical stakeholders
- A results-driven mindset with the ability to work independently or lead within a team

Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent; 10+ years of experience.
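A small sketch of the query-tuning workflow this role centers on: inspecting a plan with EXPLAIN over a psycopg2 connection (Redshift speaks the PostgreSQL wire protocol). The endpoint, credentials, and table are placeholders.

```python
# Query-plan inspection sketch; connection details and table are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="...",
)
with conn, conn.cursor() as cur:
    # EXPLAIN exposes the join/distribution strategy the planner chose,
    # which is the usual starting point for tuning work.
    cur.execute("EXPLAIN SELECT customer_id, SUM(amount) FROM sales GROUP BY 1")
    for (line,) in cur.fetchall():
        print(line)
```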

Posted 2 weeks ago

Apply

6.0 - 10.0 years

13 - 18 Lacs

Mumbai

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a skilled Data Engineer to design, build, and maintain scalable, secure, and high-performance data solutions. This role spans the full data engineering lifecycle, from research and architecture to deployment and support, within cloud-native environments, with a strong focus on AWS and Kubernetes (EKS).

Primary Responsibilities:
- Data Engineering Lifecycle: Lead research, proof of concept, architecture, development, testing, deployment, and ongoing maintenance of data solutions
- Data Solutions: Design and implement modular, flexible, secure, and reliable data systems that scale with business needs
- Instrumentation and Monitoring: Integrate pipeline observability to detect and resolve issues proactively
- Troubleshooting and Optimization: Develop tools and processes to debug, optimize, and maintain production systems
- Tech Debt Reduction: Identify and address legacy inefficiencies to improve performance and maintainability
- Debugging and Troubleshooting: Quickly diagnose and resolve unknown issues across complex systems
- Documentation and Governance: Maintain clear documentation of data models, transformations, and pipelines to ensure security and governance compliance
- Cloud Expertise: Leverage advanced skills in AWS and EKS to build, deploy, and scale cloud-native data platforms
- Cross-Functional Support: Collaborate with analytics, application development, and business teams to enable data-driven solutions
- Team Leadership: Lead and mentor engineering teams to ensure operational efficiency and innovation
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in Computer Science or a related field
- 5+ years of experience in data engineering or related roles
- Proven experience designing and deploying scalable, secure, high-quality data solutions
- Solid expertise across the full data engineering lifecycle (research to maintenance)
- Advanced AWS and EKS knowledge
- Proficiency in CI/CD, IaC, and addressing tech debt
- Proven skill in monitoring and instrumentation of data pipelines
- Advanced troubleshooting and performance optimization abilities
- An ownership mindset with the ability to manage multiple components
- Effective cross-functional collaboration (DS, SMEs, and external teams)
- Exceptional debugging and problem-solving skills
- A solid individual contributor with a team-first approach

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health that are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

7 - 11 Lacs

Gurugram

Work from Office

Job Summary
Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights to support our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role will enable data-driven decision-making across teams, contributing to strategic growth and operational excellence.

Software Requirements
Required:
- Power BI (or equivalent visualization tools like Streamlit or Dash)
- SQL (for data extraction, manipulation, and querying)
- Python (for scripting, automation, and advanced analysis)
- Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar)
Preferred:
- Cloud platform familiarity, especially AWS services related to data storage and processing
- Knowledge of other visualization platforms (Tableau, Looker)
- Familiarity with source control systems (e.g., Git)

Overall Responsibilities
- Develop, redesign, and maintain interactive dashboards and visualization tools to provide actionable insights
- Perform complex data analysis, transformations, and validation using SQL and Python
- Automate data workflows, reporting, and visualizations to streamline processes
- Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions
- Support data extraction, cleaning, and validation from various sources, ensuring data accuracy
- Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability
- Document technical procedures and contribute to best practices for data management and reporting

Performance Outcomes:
- Timely, accurate, and insightful dashboards and reports
- Increased automation reducing manual effort
- Clear communication of insights and data-driven recommendations to stakeholders

Technical Skills (by Category)
- Programming Languages. Essential: SQL, Python. Preferred: R and additional scripting languages
- Databases/Data Management. Essential: relational databases (SQL Server, MySQL, Oracle). Preferred: NoSQL databases like MongoDB; cloud data warehouses (AWS Redshift, Snowflake)
- Cloud Technologies. Essential: basic understanding of AWS cloud services (S3, EC2, RDS). Preferred: experience with cloud-native data solutions and deployment
- Frameworks and Libraries. Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash. Visualization: Power BI, Tableau (preferred)
- Development Tools and Methodologies: version control (Git); automation tools for workflows and reporting; familiarity with Agile methodologies
- Security Protocols: awareness of data security best practices and compliance standards in cloud environments

Experience Requirements
- 3-5 years of experience in data analysis, visualization, or related data roles
- Proven ability to deliver insightful dashboards, reports, and analysis
- Experience working across teams and communicating complex insights clearly
- Knowledge of cloud environments like AWS or other cloud providers is desirable
- Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer

Day-to-Day Activities
- Collaborate with stakeholders to gather requirements and define data visualization strategies
- Design and maintain dashboards using Power BI, Streamlit, Dash, or similar tools
- Extract, transform, and analyze data using SQL and Python scripts
- Automate recurring workflows and report generation to improve operational efficiency
- Troubleshoot data issues and derive insights to support decision-making
- Monitor and optimize cloud data storage and processing pipelines
- Present findings to business units, translating technical outputs into actionable recommendations

Qualifications
- Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a Master's degree is a plus
- Relevant certifications (e.g., Power BI, AWS Data Analytics) are advantageous
- Demonstrated experience with data visualization and scripting tools
- Continuous-learning mindset to stay updated on new data analysis trends and cloud innovations

Professional Competencies
- Strong analytical and problem-solving skills
- Effective communication, with the ability to explain complex insights clearly
- Collaborative team player with stakeholder management skills
- Adaptability to rapidly changing data or project environments
- Innovative mindset to suggest and implement data-driven solutions
- Organized, self-motivated, and capable of managing multiple priorities efficiently

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
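As an illustration of the dashboard work this posting describes, here is a minimal Streamlit sketch. The CSV path and column names are assumptions made for the example.

```python
# Minimal Streamlit dashboard sketch; data source and columns are hypothetical.
import pandas as pd
import plotly.express as px
import streamlit as st

st.title("Daily Sales Overview")

df = pd.read_csv("sales.csv", parse_dates=["order_date"])
region = st.selectbox("Region", sorted(df["region"].unique()))

# Filter to the chosen region and aggregate revenue by day.
filtered = df[df["region"] == region]
daily = filtered.groupby("order_date")["revenue"].sum().reset_index()

st.plotly_chart(px.line(daily, x="order_date", y="revenue"))
st.metric("Total revenue", f"{filtered['revenue'].sum():,.0f}")
```

Saved as app.py, this would be launched with `streamlit run app.py`.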

Posted 2 weeks ago

Apply

4.0 - 7.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Job Area: Miscellaneous Group > Data Analyst

Qualcomm Overview: Qualcomm is a company of inventors that unlocked 5G, ushering in an age of rapid acceleration in connectivity and new possibilities that will transform industries, create jobs, and enrich lives. But this is just the beginning. It takes inventive minds with diverse skills, backgrounds, and cultures to transform 5G's potential into world-changing technologies and products. This is the Invention Age, and this is where you come in.

General Summary
About the Team: Qualcomm's People Analytics team plays a crucial role in transforming data into strategic workforce insights that drive HR and business decisions. As part of this lean but high-impact team, you will have the opportunity to analyze workforce trends, ensure data accuracy, and collaborate with key stakeholders to enhance our data ecosystem. This role is ideal for a generalist who thrives in a fast-paced, evolving environment, someone who can independently conduct data analyses, communicate insights effectively, and work cross-functionally to enhance our People Analytics infrastructure.

Why Join Us
- End-to-End Impact: Work on the full analytics cycle, from data extraction to insight generation, driving meaningful HR and business decisions
- Collaboration at Scale: Partner with HR leaders, IT, and other analysts to ensure seamless data integration and analytics excellence
- Data-Driven Culture: Be a key player in refining our data lake, ensuring data integrity, and influencing data governance efforts
- Professional Growth: Gain exposure to multiple areas of people analytics, including analytics, storytelling, and stakeholder engagement

Key Responsibilities
People Analytics & Insights:
- Analyze HR and workforce data to identify trends, generate insights, and provide recommendations to business and HR leaders
- Develop thoughtful insights to support ongoing HR and business decision-making
- Present findings in a clear and compelling way to stakeholders at various levels, including senior leadership
Data Quality & Governance:
- Ensure accuracy, consistency, and completeness of data when pulling from the data lake and other sources
- Identify and troubleshoot data inconsistencies, collaborating with IT and other teams to resolve issues
- Document and maintain data definitions, sources, and reporting standards to drive consistency across analytics initiatives
Collaboration & Stakeholder Management:
- Work closely with other analysts on the team to align methodologies, share best practices, and enhance analytical capabilities
- Act as a bridge between People Analytics, HR, and IT teams to define and communicate data requirements
- Partner with IT and data engineering teams to improve data infrastructure and expand available datasets

Qualifications
Required: 4-7 years of experience in a People Analytics focused role
Analytical & Technical Skills:
- Strong ability to analyze, interpret, and visualize HR and workforce data to drive insights
- Experience working with large datasets and ensuring data integrity
- Proficiency in Excel and at least one data visualization tool (e.g., Tableau, Power BI)
Communication & Stakeholder Management:
- Ability to communicate data insights effectively to both technical and non-technical audiences
- Strong documentation skills to define and communicate data requirements clearly
- Experience collaborating with cross-functional teams, including HR, IT, and business stakeholders
Preferred:
- Technical Proficiency: experience with SQL, Python, or R for data manipulation and analysis; familiarity with HR systems (e.g., Workday) and cloud-based data platforms
- People Analytics Expertise: prior experience in HR analytics, workforce planning, or related fields; understanding of key HR metrics and workforce trends (e.g., turnover, engagement, diversity analytics)

Additional Information: This is an office-based position (4 days a week onsite) with possible locations that may include India and Mexico.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail myhr.support@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 2 weeks ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office

Job Title: Data Engineer Sr. Analyst, ACS Song
Management Level: Level 10 - Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
- Design, develop, optimize, and maintain data pipelines that adhere to ETL principles and business goals
- Solve complex data problems to deliver insights that help our business achieve its goals
- Source data (structured and unstructured) from various touchpoints, and format and organize it into an analyzable form
- Create data products that improve the productivity of the analytics team
- Call AI services such as vision and translation to generate outcomes for use in further steps along the pipeline
- Foster a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
- Prepare data to create a unified database, and build tracking solutions that ensure data quality
- Create production-grade analytical assets deployed using CI/CD principles

Professional and Technical Skills
- Expert in at least two of: Python, Scala, PySpark, PyTorch, JavaScript
- Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies
- Experience with one of the many BI tools, such as Tableau, Power BI, or Looker
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
- Extensive experience with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse

Additional Information
- Experience working in cloud data warehouses such as Redshift or Synapse
- Certification in one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core; Databricks Data Engineering

Qualification
Experience: 3.5 - 5 years of experience is required. Educational Qualification: Graduation.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: SAS Platform
Good-to-have skills: NA
Minimum 5 years of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring the successful delivery of high-quality software solutions.

Roles & Responsibilities:
- Experience with SAS deployments for SAS Grid multi-machine environments and SAS multi-machine clustered installations
- Experience configuring the EMI framework for SAS Environment extended monitoring
- Experience creating metadata connections to third-party databases, such as MS SQL, Teradata, Redshift, Snowflake, and SQL Anywhere ODBC connections
- Strong experience with SAS processes for plan file generation, depot maintenance, hotfix installation, and license renewal
- Hands-on experience using DI/BI tools such as SAS Data Integration Studio, SAS Enterprise Guide, and SAS Forecast Studio would be an added advantage
- Proficiency in SAS admin activities, such as creating access to SAS and managing SAS groups
- Experience setting up the SAS security model using ACTs
- Experience maintaining the security bridge on Unix servers through ACL settings at both the folder and user level
- Experience developing shell scripts that check system resources and send email notifications
- Experience administering and maintaining SAS Grid environments and environments configured by Lev
- Experience adding users, groups, and authentication domains; setting up ACTs; scheduling jobs; taking backups; configuring Base SAS and third-party libraries; and registering tables from SAS Management Console
- Installation of SAS client applications and troubleshooting of technical problems
- Tracking applicable SAS hotfixes and creating implementation plans for the respective SAS modules and applications
- Monitoring SAS server resources and reporting usage

Professional & Technical Skills:
- Identifying performance issues and recommending/implementing tuning of active SAS environments, projecting capacity shortfalls
- Designing, implementing, and maintaining security on SAS Metadata and Linux/Unix for users
- Designing, implementing, and executing change control and promotion tasks in SAS
- Providing guidance and assistance to SAS developers on operational and technical issues, interfacing with the SAS Institute for support on administrative and system issues
- Monitoring and logging SAS servers and optimizing memory usage or tuning servers for optimal performance
- Focusing on capability, interoperability, scalability, and enterprise-class concerns (e.g., failover)
- Developing applications using SAS Base and macros; testing and debugging applications to ensure they meet quality standards
- Providing technical guidance and support to junior team members; contributing to team discussions and actively participating in providing solutions to work-related problems
- Strong communication skills to present technical information to business stakeholders
- Experienced in troubleshooting, documentation, and backtracking of critical path flow processes
- Proven experience administering both production and lower environments (development, testing, and staging) for SAS Grid, including managing configurations and deployments and ensuring seamless integration and operation across environments
- Demonstrated ability to handle operational tasks and maintenance activities for SAS Grid environments, including performance tuning, troubleshooting issues, and implementing updates and patches in both production and lower environments; the role requires ensuring high availability and reliability across all environments

Additional Information: The candidate should have a minimum of 5 years of experience in SAS Administration. 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 3 years of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As an Application Support Engineer, you will be responsible for identifying and solving issues within multiple components of critical business systems. Your typical day will involve providing support for SAP ABAP development and ensuring smooth functioning of the system. You will engage with various stakeholders to understand their needs and provide timely solutions, ensuring that all systems operate efficiently and effectively. Your role will also require you to monitor system performance, troubleshoot issues, and implement necessary updates to maintain optimal functionality. Collaboration with team members and other departments will be essential to ensure that all business processes are supported seamlessly.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation and contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Assist in documenting processes and procedures to enhance team knowledge
- Provide training and support to junior team members to foster their development

Professional & Technical Skills:
- Must-have: proficiency in Talend ETL
- Strong understanding of data integration and transformation processes
- Experience troubleshooting and resolving application issues
- Familiarity with database management and SQL
- Ability to work collaboratively in a team environment
- Knowledge of the Java programming language, Oracle, SQL Server, and MySQL

Additional Information:
- The candidate should have a minimum of 3 years of experience in Talend ETL
- This position is based at our Hyderabad office
- 15 years of full-time education is required

Posted 2 weeks ago

Apply

6.0 - 11.0 years

35 - 50 Lacs

Bengaluru

Remote

This is an URGENT requirement. We are hiring for a UK-based fintech company (name kept confidential). The company is seeking a talented Senior Analytics Engineer to join the team and help build analytics pipelines, working closely with senior stakeholders. As an Analytics Engineer specializing in the payments space, you'll be at the forefront of analysing payment transaction data, uncovering trends, and optimising card issuance operations. Your work will directly shape strategic initiatives and improve business outcomes. Please note that advanced experience with Data Build Tool (dbt) is a MUST for this role; you should NOT apply if you don't have dbt experience.

Key Responsibilities
- Analyze large datasets related to payment processing and customer transactions to uncover trends and actionable insights
- Develop dashboards and reports to track KPIs and support decision-making
- Work with stakeholders to understand data needs and provide insights through presentations and reports
- Deliver data-driven recommendations to support business objectives
- Build and optimize data pipelines using dbt, ensuring clean and accessible data
- Monitor data quality and implement validation processes in collaboration with data engineers
- Create scalable data models in Snowflake using dbt and identify opportunities for efficiency gains
- Optimize workflows and monitor system performance for continuous improvement
- Ensure data practices meet regulatory standards and assist in compliance reporting
- Stay updated on industry trends and contribute to process enhancements

Qualifications
- Bachelor's degree in Data Science, Computer Science, Information Systems, Finance, or a related field
- Proven experience in a Data Analyst/Analytics Engineer role, preferably in the payments industry with issuer processors
- Proven experience with SQL, dbt, and Snowflake
- Proficiency in building and managing data transformations with dbt, including optimizing complex transformations and documentation
- Hands-on experience with Snowflake as a primary data warehouse, including performance optimization, data modeling, and query tuning
- Strong proficiency in data analysis tools and languages (e.g., SQL, Python)
- Strong understanding of data modeling principles and experience applying modeling techniques
- Proficiency with data visualization tools such as Tableau, Power BI, or similar
- Knowledge of payment processing systems, card issuance, and related services
- Experience with cloud-based data solutions (e.g., AWS, Azure, Google Cloud)
- Familiarity with modern data architectures such as the data lakehouse
- Strong analytical, problem-solving, and communication skills
- Attention to detail and a commitment to data quality and integrity
- Familiarity with regulatory requirements and security standards in the financial industry

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 5 years of experience is required. Educational Qualification: B.Tech.

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will also engage in problem-solving activities and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Facilitate knowledge-sharing sessions to enhance team capabilities
- Monitor project progress and ensure timely delivery of application features

Professional & Technical Skills:
- Must-have: proficiency in AWS Glue
- Good to have: experience with data integration tools
- Strong understanding of cloud computing concepts and services
- Experience in application development using various programming languages
- Familiarity with database management and data warehousing solutions

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Glue
- This position is based at our Bengaluru office
- A B.Tech is required
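For context on the must-have skill, here is the standard skeleton of an AWS Glue PySpark job script. The database, table, and bucket names are placeholders; Glue supplies the awsglue runtime libraries when the job executes on the service.

```python
# AWS Glue job skeleton; catalog database/table and S3 path are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, then write curated Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```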

Posted 2 weeks ago

Apply

6.0 - 11.0 years

9 - 14 Lacs

Noida

Work from Office

Responsibilities:
- Design, develop, and maintain data pipelines using AWS, Python, and SQL
- Optimize performance with Apache Spark and Amazon Redshift
- Collaborate on cloud architecture with cross-functional teams

Key skill: Redshift
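As a sketch of the Redshift side of such pipelines, here is a bulk load from S3 using the COPY command, issued from Python. The cluster endpoint, credentials, table, and IAM role ARN are all placeholders.

```python
# Bulk-loading S3 data into Redshift via COPY; all identifiers are hypothetical.
import psycopg2

COPY_SQL = """
COPY analytics.fact_events
FROM 's3://example-curated-bucket/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="loader", password="...",
)
with conn, conn.cursor() as cur:
    # COPY is Redshift's idiomatic bulk-load path; it parallelizes across slices,
    # unlike row-by-row INSERTs.
    cur.execute(COPY_SQL)
```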

Posted 2 weeks ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Hyderabad

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines. Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams. Required Qualifications: 2+ years in data engineering using Python. Familiarity with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).
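As a small illustration of the API-integration duty above (and of the identical postings for other cities below), here is a hedged ingestion step with requests and pandas. The endpoint URL and response shape are hypothetical, and writing Parquet assumes pyarrow is installed.

```python
# Illustrative API-ingestion step; endpoint and JSON shape are hypothetical.
import pandas as pd
import requests

resp = requests.get(
    "https://api.example.com/v1/orders",
    params={"date": "2024-01-01"},
    timeout=30,
)
resp.raise_for_status()

# Flatten nested JSON into a tabular frame for downstream loading.
df = pd.json_normalize(resp.json()["orders"])
df.to_parquet("orders_2024-01-01.parquet", index=False)
```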

Posted 2 weeks ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Mumbai

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines. Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams. Required Qualifications: 2+ years in data engineering using Python. Familiarity with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).

Posted 2 weeks ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Kolkata

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines. Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams. Required Qualifications: 2+ years in data engineering using Python. Familiarity with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).

Posted 2 weeks ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Bengaluru

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines. Key Responsibilities: Build and optimize ETL/ELT data pipelines. Integrate APIs and large-scale data ingestion systems. Automate data workflows using Python and cloud tools. Collaborate with data science and analytics teams. Required Qualifications: 2+ years in data engineering using Python. Familiarity with tools like Airflow, Pandas, and SQL. Experience with cloud data services (AWS/GCP/Azure).

Posted 2 weeks ago

Apply

2.0 - 5.0 years

30 - 32 Lacs

Bengaluru

Work from Office

Data Engineer - 2 (Experience: 2-5 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, managing the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and it sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technology fellows great opportunities to build things from scratch and create a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to be a 100+ member team, primarily based in Bangalore, comprising roughly 10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies.

The data platform org is divided into three key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions such as EMR optimizers; automation; and observability capabilities for Kotak's data platform. The team will also be the center of data engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.

Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you.

Your day-to-day role will include:
- Driving business decisions with technical input and leading the team
- Designing, implementing, and supporting a data infrastructure from scratch
- Managing AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA
- Extracting, transforming, and loading data from various sources using SQL and AWS big data technologies
- Exploring and learning the latest AWS technologies to enhance capabilities and efficiency
- Collaborating with data scientists and BI engineers to adopt best practices in reporting and analysis
- Improving ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Building data platforms, data pipelines, and data management and governance tools

Basic qualifications for Data Engineer / SDE in Data:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills; strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

Preferred qualifications:
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering best practices across the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling; ensuring the accuracy and availability of data to customers
- Proficiency in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 5 years of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications are aligned with business objectives and user needs, while maintaining a focus on quality and efficiency throughout the project lifecycle.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Facilitate knowledge-sharing sessions to enhance team capabilities
- Monitor project progress and ensure timely delivery of milestones

Professional & Technical Skills:
- Must-have: proficiency in Talend ETL
- Strong understanding of data integration processes and methodologies
- Experience with data warehousing concepts and practices
- Familiarity with SQL and database management systems
- Ability to troubleshoot and resolve technical issues efficiently

Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend ETL
- This position is based at our Bengaluru office
- 15 years of full-time education is required

Posted 2 weeks ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad, Pune, Telangana

Work from Office

We have immediate openings in Big Data for a contract-to-hire role with multiple clients.

Job Details
Skills: Big Data. Job type: Contract to Hire.

Primary Skills
- 6-8 years of experience working as a big data developer and supporting environments
- Strong knowledge of Unix/big data scripting
- Strong understanding of the big data (CDP/Hive) environment
- Hands-on experience with GitHub and CI/CD implementations
- Attitude to learn and understand every task being done, and the reasoning behind it
- Ability to work independently on specialized assignments within the context of project deliverables
- Takes ownership of providing solutions and tools that iteratively increase engineering efficiency
- Excellent communication skills; team player
- Good to have: Hadoop and Control-M tooling knowledge
- Good to have: automation experience and knowledge of any monitoring tools

Role
- You will work with the team handling applications developed using Hadoop/CDP and Hive
- You will work within the Data Engineering team and with the Lead Hadoop Data Engineer and Product Owner
- You are expected to support existing applications as well as design and build new data pipelines
- You are expected to support evergreening and upgrade activities for CDP/SAS/Hive
- You are expected to participate in service management of the application
- Support issue resolution, improve processing performance, and prevent issues from recurring
- Ensure the use of Hive, Unix scripting, and Control-M reduces lead time to delivery
- Support the application on UK shifts as well as on-call support overnight and on weekends (mandatory)

Working Hours: UK shift, one week per month; on-call, one week per month.

Posted 2 weeks ago

Apply

1.0 - 7.0 years

3 - 9 Lacs

Bengaluru

Work from Office

Responsibilities:
- Design, develop, and implement machine learning models and statistical algorithms
- Analyze large datasets to extract meaningful insights and trends
- Collaborate with stakeholders to define business problems and deliver data-driven solutions
- Optimize and scale machine learning models for production environments
- Present analytical findings and recommendations in a clear, actionable manner

Key Skills:
- Proficiency in Python, R, and SQL
- Experience with ML libraries like TensorFlow, PyTorch, or Scikit-learn
- Strong knowledge of statistical methods and data visualization tools
- Excellent problem-solving and storytelling skills
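To illustrate the model-development work listed above, here is a compact scikit-learn sketch that trains and evaluates a classifier. It uses a dataset bundled with scikit-learn, so it runs as-is; the model choice and hyperparameters are arbitrary for the example.

```python
# Small scikit-learn train/evaluate sketch using a bundled dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a baseline model; hyperparameters here are illustrative, not tuned.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Report per-class precision/recall/F1 on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```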

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform (CDP), ensuring data integrity and accuracy
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations

Technical Skills:
- PySpark: Advanced proficiency, including working with RDDs, DataFrames, and optimization techniques
- Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala)
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks
- Scripting and Automation: Strong scripting skills in Linux

Experience:
- 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform
- Proven track record of implementing data engineering best practices
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP
- Implement and manage data ingestion processes from various sources
- Process, cleanse, and transform large datasets using PySpark
- Conduct performance tuning and optimization of ETL processes
- Implement data quality checks and validation routines
- Automate data workflows using orchestration tools
- Monitor pipeline performance and troubleshoot issues
- Collaborate with team members to understand data requirements
- Maintain documentation of data engineering processes and configurations

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field
- Relevant certifications in PySpark and Cloudera technologies are a plus

Soft Skills:
- Strong analytical and problem-solving skills
- Excellent verbal and written communication abilities
- Ability to work independently and collaboratively in a team environment
- Attention to detail and commitment to data quality

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
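As a sketch of the data-quality validation routines this posting calls for, here is a minimal PySpark check that fails a pipeline rather than letting bad data flow downstream. The input path and column names are assumptions for the example.

```python
# PySpark data-quality gate; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/data/curated/transactions/")

total = df.count()
null_ids = df.filter(F.col("txn_id").isNull()).count()
dupes = total - df.dropDuplicates(["txn_id"]).count()

# Fail loudly on violations so orchestration (Oozie/Airflow) marks the run red.
if null_ids or dupes:
    raise ValueError(
        f"DQ failure: {null_ids} null ids, {dupes} duplicate ids out of {total} rows"
    )
print(f"DQ passed: {total} rows validated")
```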

Posted 2 weeks ago

Apply