2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Design, develop, and maintain automated test scripts and frameworks to support functional, integration, and regression testing of software applications (a minimal example follows below).
Collaborate with software developers to understand system requirements and design test cases that effectively validate software functionality and performance.
Execute automated test suites and analyze test results to identify defects, track issues, and ensure timely resolution.
Participate in code reviews and contribute to the development of high-quality, reliable code that meets testing standards and best practices.
Investigate and troubleshoot issues reported by customers or internal stakeholders, working closely with cross-functional teams to diagnose and resolve problems.
Contribute to the development and maintenance of continuous integration/continuous deployment (CI/CD) pipelines to automate build, test, and deployment processes.
Stay up to date on emerging trends and best practices in software testing and quality assurance, and propose improvements to testing methodologies and tools.
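As a flavor of the automated regression testing described above, here is a minimal pytest sketch; the `calculate_discount` function is a hypothetical stand-in for code under test, not anything from this employer.

```python
# A minimal sketch of an automated regression test, assuming pytest is
# installed; the function under test is hypothetical.
import pytest

def calculate_discount(price: float, rate: float) -> float:
    """Hypothetical production function: apply a percentage discount."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

@pytest.mark.parametrize("price,rate,expected", [
    (100.0, 0.10, 90.0),   # typical case
    (59.99, 0.25, 44.99),  # rounding behaviour
    (100.0, 0.0, 100.0),   # boundary: no discount
])
def test_calculate_discount(price, rate, expected):
    assert calculate_discount(price, rate) == expected

def test_invalid_rate_rejected():
    # Regression suites also pin down error behaviour, not just happy paths.
    with pytest.raises(ValueError):
        calculate_discount(100.0, 1.5)
```

Parametrized cases like these are what typically get wired into the CI/CD pipelines the posting mentions, so every build re-runs the full regression set.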
Posted 3 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
Key Responsibilities
Understand business requirements by engaging with business teams.
Extract data from valuable data sources and automate the data collection process.
Process and clean data, and validate the integrity of the data to be used for analysis.
Perform exploratory data analysis to identify trends and patterns in large amounts of data.
Build machine learning models using algorithms and statistical techniques such as regression, decision trees, and boosting (a minimal example follows below).
Present insights using data visualization techniques.
Propose solutions and strategies for complex business challenges.
Build GenAI models using RAG frameworks for chatbots, summarisation, etc.
Develop model deployment pipelines using Lambda, ECS, etc.
Skills & Attributes
Knowledge of statistical programming languages such as R and Python, database query languages such as SQL, and statistical concepts such as distributions and regression.
Experience with data visualization tools such as Tableau and QlikSense.
Ability to write comprehensive reports, with an analytical mind and an inclination for problem-solving.
Exposure to advanced techniques such as GenAI, neural networks, NLP, and image and speech processing.
Ability to engage with stakeholders to understand business requirements and convert them into technical problems for solution development and deployment.
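As a flavor of the model-building responsibility above, a minimal sketch using scikit-learn's gradient boosting on synthetic data; every name and number here is illustrative, not part of the role.

```python
# A minimal boosting-regression sketch on synthetic data, assuming
# scikit-learn and NumPy are installed.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                        # four synthetic features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Boosting, as named in the posting, fits many shallow trees sequentially.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```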
Posted 3 weeks ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include:
Building, refining, tuning, and maintaining our real-time and batch data infrastructure
Using technologies such as Python, Spark, Airflow, Snowflake, Hive, and FastAPI daily
Maintaining data quality and accuracy across production data systems
Working with Data Analysts to develop ETL processes for analysis and reporting
Working with Product Managers to design and build data products
Working with our DevOps team to scale and optimize our data infrastructure
Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
Participating in an on-call rotation in your time zone (being available by phone or email in case something goes wrong)
Desired Characteristics:
Minimum 8 years of software engineering experience
An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
2+ years of experience and fluency in Python
Proficiency with relational databases and advanced SQL
Expert-level use of services such as Spark and Hive; experience with container-based solutions is a plus
Experience with a scheduler such as Apache Airflow, Apache Luigi, or Chronos (see the sketch below)
Experience using cloud services (AWS) at scale
Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things
Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments
Exposure to the whole software development lifecycle, from inception to production and monitoring
Experience in the advertising attribution domain is a plus
Experience with agile software development processes
Excellent interpersonal and communication skills
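To illustrate the scheduler experience called for above, a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+; the DAG id, task names, and callables are hypothetical placeholders).

```python
# A minimal daily batch-pipeline sketch for Apache Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")   # placeholder step

def load():
    print("write curated tables to the warehouse")    # placeholder step

with DAG(
    dag_id="daily_events_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # run once per day
    catchup=False,                    # skip backfilling past runs
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # load runs only after extract succeeds
```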
Posted 3 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Education Qualification: Engineer - B.E / B.Tech / MCA
Skills:
Tertiary -> Technology | Data Analytics Activities | Data Analysis | 3 - Experienced
Tertiary -> Technology | BI, DWH, ETL Roles | DWH Architect | 3 - Experienced
Secondary -> Technology | Data Analytics Activities | Data Processing | 3 - Experienced
Primary -> Technology | Data Analytics Activities | Data Integration | 3 - Experienced
Secondary -> Technology | Big Data Tools / Systems | Streams | 3 - Experienced
Tertiary -> Functional | Pre Sales Support Activities | Responding to RFPs | 3 - Experienced
Primary -> Technology | Data Analytics Activities | Data Mining | 3 - Experienced
Certification: Technology | IT Certifications | Microsoft Certification | Perform Data Engineering on Microsoft HD Insight
Details: The professional will be responsible for analysing methods to improve data reliability and quality, combining raw information from different sources into consistent, machine-readable formats, and developing and testing architectures that enable data extraction and transformation for predictive or prescriptive modeling.
1. Analyze and organize raw data
2. Build data systems and pipelines
3. Evaluate business needs and objectives
4. Interpret trends and patterns
5. Conduct complex data analysis and report on results
6. Prepare data for prescriptive and predictive modeling
7. Build algorithms and prototypes
8. Combine raw information from different sources
9. Explore ways to enhance data quality and reliability
10. Identify opportunities for data acquisition
11. Develop analytical tools and programs
12. Collaborate with data scientists and architects on several projects
Posted 3 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Requirements:
Develop automation that supports two-factor authentication and secure data processing
Integrate with third-party systems to support bi-directional data transactions
Analyze RPA performance and recommend improvements
Monitor data quality and clean data as needed
Document RPA package functionality and provide end-user training
Provide support and troubleshooting of technical issues
Lead the gathering of business requirements and the prototyping of business solutions
Qualifications:
Bachelor's Degree in Information Technology or a related area
Must have experience building and deploying three or more RPA bots
Minimum of three years' experience with software development
Must have experience with distributed team development and excellent collaborative skills
Strong written and verbal communication skills
If you've got the skills to succeed and the motivation to make it happen, we look forward to hearing from you.
Posted 3 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Education Qualification: Engineer - B.E / B.Tech / MCA
Skills:
Secondary -> Technology | Big Data Tools / Systems | Streams | 3 - Experienced
Primary -> Technology | Data Analytics Activities | Data Integration | 3 - Experienced
Secondary -> Technology | Data Analytics Activities | Data Processing | 3 - Experienced
Primary -> Technology | Data Analytics Activities | Data Mining | 3 - Experienced
Tertiary -> Functional | Pre Sales Support Activities | Responding to RFPs | 3 - Experienced
Tertiary -> Technology | Data Analytics Activities | Data Analysis | 3 - Experienced
Tertiary -> Technology | BI, DWH, ETL Roles | DWH Architect | 3 - Experienced
Certification: Technology | IT Certifications | Microsoft Certification | Perform Data Engineering on Microsoft HD Insight
Details: The professional will be responsible for analysing methods to improve data reliability and quality, combining raw information from different sources into consistent, machine-readable formats, and developing and testing architectures that enable data extraction and transformation for predictive or prescriptive modeling.
1. Analyze and organize raw data
2. Build data systems and pipelines
3. Evaluate business needs and objectives
4. Interpret trends and patterns
5. Conduct complex data analysis and report on results
6. Prepare data for prescriptive and predictive modeling
7. Build algorithms and prototypes
8. Combine raw information from different sources
9. Explore ways to enhance data quality and reliability
10. Identify opportunities for data acquisition
11. Develop analytical tools and programs
12. Collaborate with data scientists and architects on several projects
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Job Summary
As a Software Engineer III at JPMorgan Chase within the Corporate and Investment Bank, you serve as a seasoned member of an agile team that designs and delivers trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics
Frequently utilize SQL, and understand NoSQL databases and their niche in the marketplace
Collaborate closely with cross-functional teams to develop efficient data pipelines that support various data-driven initiatives
Implement best practices for data engineering, ensuring data quality, reliability, and performance
Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows
Perform data extraction and implement complex data transformation logic to meet business requirements
Leverage advanced analytical skills to improve data pipelines and ensure data delivery is consistent across projects
Monitor and execute data quality checks to proactively identify and address anomalies
Ensure data availability and accuracy for analytical purposes
Identify opportunities for process automation within data engineering workflows
Communicate technical concepts to both technical and non-technical stakeholders
Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS
Implement data orchestration and workflow automation using AWS Step Functions and EventBridge
Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure
Required qualifications, capabilities, and skills
Formal training or certification on data engineering concepts and 3+ years of applied experience
Experience across the data lifecycle
Advanced SQL skills (e.g., joins and aggregations; see the sketch below)
Advanced knowledge of RDBMS such as Aurora
Experience with microservice-based components using ECS or EKS
Working understanding of NoSQL databases
4+ years of data engineering experience building and optimizing data pipelines, architectures, and data sets (Glue or Databricks ETL)
Proficiency in object-oriented and functional scripting languages (Python, etc.)
Experience developing ETL processes and workflows for streaming data from heterogeneous data sources
Willingness and ability to learn and pick up new skill sets
Experience working with modern data lakes (e.g., Databricks)
Experience building pipelines on AWS using Terraform and CI/CD pipelines
Preferred qualifications, capabilities, and skills
Experience with data pipeline and workflow management tools (Airflow, etc.)
Strong analytical and problem-solving skills, with attention to detail
Ability to work independently and collaboratively in a team environment
Good communication skills, with the ability to convey technical concepts to non-technical stakeholders
A proactive approach to learning and adapting to new technologies and methodologies
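To ground the joins-and-aggregations requirement above, a minimal PySpark sketch; the orders/customers schemas are invented for illustration and are not JPMorgan code.

```python
# A minimal join-plus-aggregation ETL step in PySpark, assuming pyspark
# is installed; all data and names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, "C1", 120.0), (2, "C1", 80.0), (3, "C2", 200.0)],
    ["order_id", "customer_id", "amount"],
)
customers = spark.createDataFrame(
    [("C1", "IN"), ("C2", "US")], ["customer_id", "country"]
)

# Join, then aggregate revenue per country: the core of many ETL steps.
revenue = (
    orders.join(customers, "customer_id")
    .groupBy("country")
    .agg(F.sum("amount").alias("total_revenue"),
         F.count("order_id").alias("order_count"))
)
revenue.show()
```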
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
GCP Data Engineer
We are seeking a skilled and forward-thinking Data Engineer to join our Emerging Tech team.
Company: Aqilea India
Employment Type: Full Time
Location: Bangalore (Hybrid)
Experience: 4 to 6 years
About the Role: We are seeking a skilled Data Engineer with strong expertise in building and optimizing data pipelines, managing large-scale datasets, and enabling data-driven decision-making. The ideal candidate should have solid experience in SQL, Python, and cloud-based data platforms, with a focus on Google Cloud Platform (GCP).
Responsibilities:
Design, build, and maintain scalable and efficient data pipelines
Develop and optimize data models in GCP BigQuery (see the sketch below)
Work with Apache Spark / PySpark for large-scale data processing
Implement and manage data workflows using Airflow and dbt
Collaborate with analysts, data scientists, and business stakeholders to deliver high-quality data solutions
Ensure data quality, reliability, and compliance across platforms
Monitor and optimize the performance of data pipelines and queries
Required Skills & Experience:
Strong proficiency in SQL and Python
Hands-on experience with GCP BigQuery for data warehousing
Expertise in Apache Spark / PySpark for data transformations
Experience with dbt for data modeling and transformation
Knowledge of Apache Airflow for orchestration and scheduling
Solid understanding of data engineering best practices, performance optimization, and data governance
Experience working in agile, collaborative environments
Notice Period: Immediate to 15 Days Only
Work Location: Bangalore (Hybrid)
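A minimal sketch of querying BigQuery from Python, the kind of work this role involves; it assumes the google-cloud-bigquery client is installed and credentials are configured, and the project/dataset/table names are hypothetical.

```python
# A minimal BigQuery query sketch; requires google-cloud-bigquery and
# ambient GCP credentials. Table names below are assumptions.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from environment

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `my_project.analytics.events`   -- hypothetical table
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""

# result() blocks until the job finishes, then yields Row objects.
for row in client.query(sql).result():
    print(row.event_date, row.events)
```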
Posted 3 weeks ago
4.0 - 7.0 years
6 - 9 Lacs
Kolkata
Work from Office
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Summary: In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Responsibilities
Hands-on experience in Azure Databricks, ADF, or Synapse Analytics
Proficiency in Python for data processing and scripting
Strong command of SQL: writing complex queries, performance tuning, etc.
Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas)
Understanding of CI/CD practices in a data engineering context
Excellent problem-solving and communication skills
Mandatory skill sets: ADF, SQL & Python
Preferred skill sets: Experience with Delta Lake, Power BI, or Azure DevOps; knowledge of Spark, Scala, or other distributed processing frameworks; exposure to BI tools such as Power BI, Tableau, or Looker; familiarity with data security and compliance in the cloud; experience leading a development team
Years of experience required: 4 to 7 years
Education qualification: BTech/MBA/MCA
Degrees/Field of Study required: MBA (Master of Business Administration), Bachelor of Technology
Required Skills: ADF, Business Components
Additional skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Travel Requirements
Available for Work Visa Sponsorship
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Education Qualification: Engineer - B.E / B.Tech / MCA
Skills:
Tertiary -> Functional | Pre Sales Support Activities | Responding to RFPs | 4 - Advanced
Primary -> Technology | Data Analytics Activities | Data Mining | 4 - Advanced
Secondary -> Technology | Data Analytics Activities | Data Processing | 4 - Advanced
Tertiary -> Technology | BI, DWH, ETL Roles | DWH Architect | 4 - Advanced
Tertiary -> Technology | Data Analytics Activities | Data Analysis | 3 - Experienced
Secondary -> Technology | Big Data Tools / Systems | Streams | 4 - Advanced
Primary -> Technology | Data Analytics Activities | Data Integration | 4 - Advanced
Certification: Technology | IT Certifications | Microsoft Certification | Perform Data Engineering on Microsoft HD Insight
Details: The professional will be responsible for analysing methods to improve data reliability and quality, combining raw information from different sources into consistent, machine-readable formats, and developing and testing architectures that enable data extraction and transformation for predictive or prescriptive modeling.
1. Analyze and organize raw data
2. Build data systems and pipelines
3. Evaluate business needs and objectives
4. Interpret trends and patterns
5. Conduct complex data analysis and report on results
6. Prepare data for prescriptive and predictive modeling
7. Build algorithms and prototypes
8. Combine raw information from different sources
9. Explore ways to enhance data quality and reliability
10. Identify opportunities for data acquisition
11. Develop analytical tools and programs
12. Collaborate with data scientists and architects on several projects
Posted 3 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Location: Bengaluru
Designation: Consultant
Your potential, unleashed.
The Team
Deloitte's Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage.
About the Role: We are seeking experienced and business-focused professionals to join our Customer Strategy & Design team. This role involves leveraging advanced analytics, statistical modeling, machine learning, and data-driven engineering practices to address complex business challenges, generate actionable insights, and drive measurable impact for our clients.
Key Responsibilities:
Design, develop, and deploy scalable analytical and machine learning solutions to solve real-world business problems
Conduct exploratory data analysis to uncover trends, patterns, and opportunities
Build predictive and prescriptive models, including classification, regression, clustering, recommendation, and forecasting
Develop and maintain robust data processing pipelines for analytical readiness
Translate business needs into analytical frameworks and present findings in a clear, compelling manner for non-technical stakeholders
Collaborate with cross-functional teams, including analytics specialists, data engineers, business analysts, and product owners
Required Skills & Qualifications:
4+ years of hands-on experience in advanced analytics, machine learning, and/or large-scale data processing
Strong programming skills in Python/R with experience using ML libraries (e.g., Scikit-learn, XGBoost, TensorFlow, PyTorch)
Proficiency in SQL, data wrangling, and working with large, complex datasets
Strong statistical grounding, including A/B testing and hypothesis testing
Cloud platform expertise (GCP preferred; AWS/Azure acceptable)
Excellent communication, storytelling, and stakeholder engagement abilities
Experience in consulting or working with global client teams is a strong advantage
Solid academic background in quantitative disciplines (Mathematics, Statistics, Computer Science, Engineering)
Additional Notes:
The initial assignment will require working PST hours from the Bellandur office
Future projects may align with standard Indian business hours
How you'll grow
Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.
Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome: entrust your happiness to us. Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.
Posted 3 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Building off our Cloud momentum, Oracle has formed a new organization: Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net-new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence.
Oracle Health Data Analytics has a rare opportunity to play a critical role in how Oracle Health products impact and disrupt the healthcare industry by transforming how healthcare and technology intersect. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures, define specifications for significant new projects, and specify, design, and develop software according to those specifications. You will perform professional software development tasks associated with developing, designing, and debugging software applications or operating systems.
Design and build distributed, scalable, and fault-tolerant software systems
Build cloud services on top of the modern OCI infrastructure
Participate in the entire software lifecycle, from design to development, to quality assurance, and to production
Invest in the best engineering and operational practices upfront to ensure our software quality bar is high
Optimize data processing pipelines for orders-of-magnitude higher throughput and faster latencies
Leverage a plethora of internal tooling at HDI to develop, build, deploy, and troubleshoot software
Qualifications
4+ years of experience in the software industry working on the design, development, and delivery of highly scalable products and services
Understanding of the entire product development lifecycle, including understanding and refining technical specifications, HLD and LLD of world-class products and services, refining the architecture by providing feedback and suggestions, developing and reviewing code, driving DevOps, and managing releases and operations
Strong knowledge of Java or JVM-based languages
Experience with multi-threading and parallel processing
Strong knowledge of big data technologies like Spark, Hadoop MapReduce, Crunch, etc.
Past experience building scalable, performant, and secure services/modules
Understanding of microservices architecture and API design
Experience with container platforms
Good understanding of testing methodologies
Experience with CI/CD technologies
Experience with observability tools like Splunk, New Relic, etc.
Good understanding of versioning tools like Git/SVN
Posted 3 weeks ago
5.0 - 7.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Position Purpose
The Senior Spark Developer will be responsible for integrating and maintaining the Quantexa platform, Spark-based software provided by a UK fintech, into our existing systems to enhance our anti-money-laundering capabilities. This role requires deep expertise in Spark development, as well as the ability to analyze and understand the underlying data. Additionally, the candidate should have an interest in exploring open-source applications distributed by Apache, Kubernetes, OpenSearch, and Oracle.
Direct Responsibilities
Integrate and upgrade the Quantexa tool with our existing systems for enhanced anti-money-laundering measures
Develop and maintain Spark-based applications deployed on Kubernetes clusters
Conduct data analysis to understand and interpret underlying data structures (see the sketch below)
Collaborate with cross-functional teams to ensure seamless integration and functionality
Stay updated with the latest trends and best practices in Spark development and Kubernetes
Contributing Responsibilities
Work independently on tasks assigned by the Lead and take ownership of them
Adhere to the timelines given by the Lead, delivering first-time-right
Develop programs in an optimal manner without compromising the core architecture/design
Work efficiently with the technologies involved and produce best-fit deliveries
Be proactive and provide status updates to the team lead
Adhere to all compliance and project-related processes
Gain functional knowledge of the applications worked upon
Upgrade technical skills through self-learning and training
Required Qualifications
5+ years of experience in development
Extensive experience in Hadoop, Spark, and Scala development (5 years minimum)
Strong analytical skills and experience in data analysis (SQL), data processing (such as ETL), parsing, data mapping, and handling real-life data quality issues
Excellent problem-solving abilities and attention to detail
Strong communication and collaboration skills
Experience in Agile development
High-quality coding skills, including code control, unit testing, design, and documentation (code, test); experience with tools such as Sonar
Experience with Git and Jenkins
Specific Qualifications
Experience with the development of Spark applications and their deployment on Kubernetes clusters
Hands-on development experience (Java, Scala, etc.) via system integration projects; Python and Elastic optional
Other Qualifications
Fluent in English
Team player
Strong analytical skills
Quality-oriented and well organized
Willing to work under pressure and mission-oriented
Excellent oral and written communication skills, motivational skills, results-oriented
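A minimal sketch of the kind of data-quality analysis this role describes, expressed in PySpark; the column names and values are invented, and nothing here reflects the Quantexa platform itself.

```python
# A minimal data-quality profiling sketch in PySpark: null rates and
# duplicate keys, two common "real-life data quality issues".
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

txns = spark.createDataFrame(
    [("T1", "ACC1", 500.0), ("T2", None, 250.0), ("T2", None, 250.0)],
    ["txn_id", "account_id", "amount"],
)

# Null rate per column: a common first pass before any AML analytics.
txns.select([
    (F.sum(F.col(c).isNull().cast("int")) / F.count(F.lit(1))).alias(c)
    for c in txns.columns
]).show()

# Duplicate transaction ids often signal upstream replay or join problems.
txns.groupBy("txn_id").count().filter("count > 1").show()
```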
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Join Clario's industry-leading Digital Physiology team as a Principal Statistical Programmer, where your expertise will drive innovation in clinical research. This is a unique opportunity to lead advanced statistical programming for cardiac safety trials, shaping the future of digital health through cutting-edge data science and global collaboration.
What we offer
Competitive compensation + shift allowances
Attractive benefits (security, flexibility, support and well-being)
Engaging employee programs
Technology for hybrid working and great onsite facilities
What you'll be doing
Statistical Programming & Data Standards
Lead and coordinate all statistical programming activities for cardiac safety trials
Develop, test, and maintain SAS code to generate CDISC-compliant datasets (SDTM, ADaM)
Produce and maintain submission-ready datasets and electronic submission packages (e.g., define.xml, reviewer's guide) in accordance with FDA guidelines
Create and implement data standards and macros to support cardiac safety deliverables
Research and apply new programming techniques to enhance data processing and analysis
Generate and interpret compliance checks on CDISC-formatted datasets
Support statistical analyses and regulatory requirements through client-facing discussions
Team Leadership & Mentorship
Manage direct reports in a line or matrix capacity, including work allocation, resource planning, and professional development
Conduct onboarding and training on statistical programming practices and SOPs
Mentor junior staff and provide guidance on programming methodologies and quality standards
Lead team strategy meetings and contribute to departmental planning
Process Improvement & Strategic Initiatives
Identify and implement process improvements to enhance operational efficiency
Develop and maintain SOPs, SWIs, templates, and playbooks for programming deliverables
Drive initiatives for future analyses, data quality, and standardization
Participate in hiring and contribute to strategic planning for the statistical programming function
Cross-functional Collaboration & Project Management
Collaborate with cross-functional teams to define scope and timelines for statistical deliverables
Manage client commitments and ensure timely delivery of assigned projects
Maintain accurate tracking of deliverable statuses and dates
Provide consultation during sponsor and regulatory teleconferences (FDA, EMEA, PMDA)
What we're looking for
A Ph.D. with 5+ years, an M.S. with 7+ years, or a B.S. with 10+ years of relevant industry experience
A degree in medical, health, public, or general science, or an equivalent combination of education and experience sufficient to perform the job duties
Strong experience in clinical trials, preferably within a CRO or pharmaceutical research organization
Proficiency in SAS programming, including the creation, testing, and maintenance of CDISC-compliant datasets (SDTM, ADaM)
Experience with electronic submission packages and regulatory interactions (e.g., FDA)
Familiarity with clinical protocols and Statistical Analysis Plans
Experience with TFL generation is a plus
Solid understanding of the pharmaceutical drug development process
Proficiency with Windows and Microsoft Office products
Demonstrated leadership experience and/or training
Excellent verbal and written communication skills
Strong organizational and analytical abilities
Ability to work both independently and in a team setting, with flexibility to adapt to changing priorities
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Job Description
This role, part of the Digital Intelligence team, offers an opportunity to make an impact in the financial services industry by developing Machine Learning solutions. As an Applied AI/ML Lead within the team at JPMorgan, you will collaborate with all lines of business and functions to deliver software solutions. You will have the opportunity to research, experiment, develop, and productionize high-quality machine learning models, services, and platforms to make a significant business impact. You will also design and implement highly scalable and reliable data processing pipelines, and perform analysis to produce insights that promote and optimize business results.
Job responsibilities
Design, deploy, and manage prompt-based models on LLMs for various NLP tasks in the financial services domain (see the sketch below)
Conduct research on prompt engineering techniques to improve the performance of prompt-based models within the financial services field, exploring and utilizing LLM orchestration and agentic AI libraries
Collaborate with cross-functional teams to identify requirements and develop solutions that meet business needs within the organization
Communicate effectively with both technical and non-technical stakeholders
Build and maintain data pipelines and data processing workflows for prompt engineering on LLMs, utilizing cloud services for scalability and efficiency
Develop and maintain tools and frameworks for prompt-based model training, evaluation, and optimization
Analyze and interpret data to evaluate model performance and identify areas of improvement
Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 5+ years of applied experience
Experience with prompt design and implementation or chatbot applications
Strong programming skills in Python, with experience in PyTorch or TensorFlow
Experience building data pipelines for both structured and unstructured data processing
Experience developing APIs and integrating NLP or LLM models into software applications
Hands-on experience with cloud platforms (AWS or Azure) for AI/ML deployment and data processing
Excellent problem-solving skills and the ability to communicate ideas and results to stakeholders and leadership in a clear and concise manner
Basic knowledge of deployment processes, including experience with Git and version control systems
Familiarity with LLM orchestration and agentic AI libraries
Hands-on experience with MLOps tools and practices, ensuring seamless integration of machine learning models into production environments
Preferred qualifications, capabilities, and skills
Familiarity with model fine-tuning techniques such as DPO and RLHF
Knowledge of Java and Spark
Knowledge of financial products and services, including trading, investment, and risk management
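A minimal sketch of prompt construction for an NLP task like those above; the template and the `call_llm` stub are hypothetical stand-ins, and a real deployment would route through an approved LLM endpoint rather than this placeholder.

```python
# A minimal prompt-engineering sketch: a parameterized template plus a
# stubbed model call. Everything here is illustrative.
SUMMARY_PROMPT = """You are a financial-services assistant.
Summarise the following client note in exactly three bullet points,
preserving any figures verbatim.

Note:
{note}
"""

def build_prompt(note: str) -> str:
    """Fill the template; prompt engineering iterates on this text."""
    return SUMMARY_PROMPT.format(note=note.strip())

def call_llm(prompt: str) -> str:
    """Stub standing in for a deployed LLM endpoint."""
    return "(model response would appear here)"

if __name__ == "__main__":
    print(call_llm(build_prompt("Client deposited $1.2m on 3 March ...")))
```

In practice, evaluation harnesses run many such templates against labeled examples and score the outputs, which is the optimization loop the responsibilities above describe.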
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Job Summary
The Data Governance team is focused on ensuring data is managed properly, according to established policies and procedures, while delivering high-quality data. Although the driver of the team is to ensure the organization gets value out of the data, the team focuses on how decisions are made about data and how the enterprise interacts with data. The Senior Analyst, Data Governance Policy and Standards executes our objectives through partnership with the Data Governance leadership team in ensuring adherence to our Data Governance policies and standards. This role requires a deep understanding of the strategic vision of Data Governance, familiarity with business policies and standards, and data processing applications.
Essential Job Functions
Strategy Management (30%): Partner with the strategy organization to execute upon our roadmap by assessing adherence to our Data Management policies and standards across the enterprise. Ensure alignment on scope and determine appropriate testing methodologies to uncover any gaps in policy and standards adherence. Partner closely with the organization on data changes and their impacts to our business units.
Collaboration (30%): Exercise the strong collaboration skills needed to partner with the teams or domains concerned, delivering the procedures and ensuring they are audited from time to time for maximum adaptability.
Data Analysis (40%): Establish the appropriate methodologies, engage internal parties, and facilitate the assessment. Upon completion of the assessment, provide actionable insight on whether enterprise training and communication are needed or standards require modification. Perform complex data analysis to identify data issues, data patterns, and potential connections to other data assets (mainframe, cloud, etc.); a minimal profiling sketch follows this posting.
Minimum Qualifications
Bachelor's Degree in Business, Computer Science, Engineering, or Information Systems, or equivalent, relevant work experience
5+ years of experience working in data governance or data management practices
Experience writing policies and/or standards, or other enterprise materials
Experience developing complex queries using SQL or other coding languages
Experience with complex data profiling and analysis
Preferred Qualifications
Master's Degree in Business, Computer Science, Engineering, or Information Systems
Experience working in analytics within data governance or data management practices, or as an auditor or examiner
Skills
Data Governance, Data Management, Data Analysis, Statistics, Datasets, Machine Learning, Learning Tools, Cloud Technology, Apache Hadoop, Platform Technologies, Microsoft SQL Server, Databricks Platform
Reports To: Manager and above
Direct Reports: 0
Work Environment
Normal office environment; some travel may be required. Hybrid mode: six days of working from the office each month, with flexibility on changes subject to organizational decisions.
Travel: Ability to travel up to 5%
Other Duties
This job description is illustrative of the types of duties typically performed by this job. It is not intended to be an exhaustive listing of each and every essential function of the job. Because job content may change from time to time, the Company reserves the right to add and/or delete essential functions from this job at any time.
About Bread Financial
At Bread Financial, you'll have the opportunity to grow your career, give back to your community, and be part of our award-winning culture. We've been consistently recognized as a best place to work nationally and in many markets, and we're proud to promote an environment where you feel appreciated, accepted, valued, and fulfilled both personally and professionally. Bread Financial supports the overall wellness of our associates with a diverse suite of benefits and offers boundless opportunities for career development and non-traditional career progression.
Bread Financial (NYSE: BFH) is a tech-forward financial services company that provides simple, personalized payment, lending, and saving solutions to millions of U.S. consumers. Our payment solutions, including Bread Financial general-purpose credit cards and savings products, empower our customers and their passions for a better life. Additionally, we deliver growth for some of the most recognized brands in travel & entertainment, health & beauty, jewelry, and specialty apparel through our private label and co-brand credit cards and pay-over-time products, providing choice and value to our shared customers.
To learn more about Bread Financial, our global associates and our sustainability commitments, visit breadfinancial.com or follow us on Instagram and LinkedIn.
All job offers are contingent upon successful completion of credit and background checks. Bread Financial is an Equal Opportunity Employer.
Job Family: Information Technology
Job Type: Regular
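A minimal, self-contained sketch of the complex data profiling this role calls for, using an in-memory SQLite table so it runs anywhere; in practice such checks would target SQL Server or Databricks, and the table and column names are assumptions.

```python
# A minimal SQL data-profiling sketch: completeness checks per column,
# using Python's built-in sqlite3 so the example is fully runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT, country TEXT);
    INSERT INTO customers VALUES
        (1, 'a@x.com', 'US'), (2, NULL, 'US'), (3, 'c@x.com', NULL);
""")

# Null counts per column: a typical policy-adherence check for
# completeness standards.
row = conn.execute("""
    SELECT COUNT(*)                                          AS total_rows,
           SUM(CASE WHEN email   IS NULL THEN 1 ELSE 0 END)  AS null_emails,
           SUM(CASE WHEN country IS NULL THEN 1 ELSE 0 END)  AS null_countries
    FROM customers
""").fetchone()
print(dict(zip(("total_rows", "null_emails", "null_countries"), row)))
```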
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
We are recruiting for a technically strong, people-centric Data Engineering Manager to lead a high-impact AI & Data engineering team based in India. This group will build the data platform and products to support the analytics, insights, and data science needs of our core subscription business. Data has always been at the heart of the Economist Group, and ultimately we aim to help the business make data-driven judgements based on real-time, actionable insights.
How you will contribute in this role:
Work with data product managers, analysts, and data scientists to build data and ML pipelines and a data warehouse/lakehouse for analytics, reporting, and prediction
Build pipelines and a lakehouse/warehouse with data feeds/APIs, using technologies such as Snowflake, Apache Airflow, Amazon EMR, and an AWS-based lakehouse
Work with data quality and observability tools and CI/CD frameworks to maintain data engineering excellence
Work on data modelling, lineage, and data governance
Manage and support a team of engineers through regular 1:1s, coaching, performance feedback, and career development; foster an inclusive, growth-oriented team culture
Contribute to engineering strategy and practices; shape processes, team rituals, and tooling that enable your team to deliver high-quality code efficiently and safely
Take full ownership of systems in your domain, ensuring observability, uptime, performance, documentation, and operational readiness
Understand the productivity of your teams and the factors affecting it, intervening when required
What we're looking for:
Professional experience designing and building data pipelines and managing data platforms (including ETL, ELT, and lambda architectures), using technologies such as Apache Airflow, Amazon Athena, AWS Glue, Amazon EMR, or equivalents
Expert-level knowledge of both SQL and programming (Python, Scala, or Java)
Experience with data technologies such as object storage (S3, HDFS), lakehouse formats (Iceberg or equivalent), data warehousing (Snowflake or equivalent), orchestration (Airflow or equivalent), data processing frameworks (PySpark, pandas, or equivalent), and data modelling, lineage, and governance tools
Professional experience with cloud platforms (AWS strongly preferred), including serverless functions, API gateways, relational and NoSQL databases, and caching
Experience with IaC, CI/CD, containerization, and DevOps
Experience working in teams with data scientists and ML engineers, building ML pipelines for recommendation, customer-lifetime-value, and propensity models
An advanced degree in software/data engineering, computer/information science, or a related quantitative field, or equivalent work experience
Strong verbal and written communication skills and the ability to work well with a wide range of stakeholders
Strong ownership; scrappy and biased for action
#LI-Hybrid
AI usage for your application
We are an innovative organisation that encourages the use of technology. We recognise that candidates may utilise AI tools to support their job application process. However, it is essential that all information you provide truthfully and accurately reflects your own experience, skills, and qualifications.
What we offer
Our benefits package is designed to support your wellbeing, growth, and work-life balance. It includes a highly competitive pension or 401(k) plan, private health insurance, and 24/7 access to counselling and wellbeing resources through our Employee Assistance Program. We also offer a range of lifestyle benefits, including our Work From Anywhere program, which allows you to work from any location where you have the legal right to do so for up to 40 days per year. In addition, we provide generous annual and parental leave, as well as dedicated days off for volunteering and even for moving home. You will also be given free access to all The Economist content, including an online subscription, our range of apps, podcasts, and more.
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
This is a full-time on-site role for a Senior Data Engineer based in Bengaluru. The Senior Data Engineer will be responsible for designing, implementing, and maintaining data infrastructure. Daily tasks include developing data models, building and optimizing ETL pipelines, managing data warehouses, and providing data analytics solutions. The role requires collaboration with cross-functional teams to ensure high-quality data solutions.
Qualifications
Data engineering, data modeling, and data warehousing skills
Experience with Extract, Transform, Load (ETL) processes
Proficiency in data analytics and data-driven decision-making
Strong problem-solving and analytical skills
Excellent communication and teamwork abilities
Experience in fintech or related industries is a plus
Bachelor's degree in Computer Science, Engineering, or a related field
Core Competencies, Knowledge and Experience:
At least 6 years of experience as a Senior Data Engineer or in a similar role
At least 2 years of experience as a team lead, team manager, or in a similar role
Familiarity with data warehousing concepts and technologies (e.g., Delta Lake; see the sketch below)
Experience building big data architectures leveraging Spark, Delta Lake, Hadoop, or similar
Strong programming skills in Python for data manipulation and transformation
Proficiency in Apache Spark for distributed, real-time data processing
Advanced SQL skills for data querying and optimization
Experience with workflow management tools like Apache Airflow
Understanding of data security and privacy principles
Excellent problem-solving and analytical abilities
Strong communication and collaboration skills to work effectively in a cross-functional team
Ability to work in a fast-paced environment and manage multiple projects simultaneously
A continuous-learning mindset to stay updated with the latest industry trends and technologies
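A minimal sketch of a Delta Lake write/read cycle, as referenced in the qualifications above; it assumes the delta-spark pip package is installed, and the path and data are illustrative.

```python
# A minimal Delta Lake round trip, assuming the delta-spark package.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-sketch")
    # These two settings switch on Delta's SQL extensions and catalog.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

events = spark.createDataFrame(
    [("u1", "login"), ("u2", "purchase")], ["user_id", "event"]
)

# Delta layers ACID transactions and time travel on top of Parquet files.
events.write.format("delta").mode("overwrite").save("/tmp/events_delta")
spark.read.format("delta").load("/tmp/events_delta").show()
```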
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
About Us
Udacity is now an Accenture company, and exciting things are happening! We are on a mission of forging futures in tech through radical talent transformation in digital technologies. We offer a unique and immersive online learning platform, powering corporate technical training in fields such as Artificial Intelligence, Machine Learning, Data Science, Autonomous Systems, Cloud Computing, and more. Our rapidly growing global organization is revolutionizing how the enterprise market bridges the talent shortage and skills gaps during its digital transformation journey.
Data Driven is a Udacity core value. The Udacity Data Engineering Team is looking for a Software Engineer to help us design and develop Udacity's company-wide data solutions in support of that value. The ideal candidate will be comfortable talking to data analysts, data scientists, and business stakeholders, understanding the data that powers their products, generalizing it, and implementing common fact tables across the company. This is a high-impact opportunity, and you'll be part of a global Data Team of highly talented Software Engineers, Data Analysts, and Data Scientists based primarily in India and North America. The system you will be working on consists of a cloud-based data lake built completely on a cloud-native environment on AWS, using technologies such as Apache Spark, Airflow, Postgres, and Redshift.
About Udacity Data Engineering
We prioritize the quality and trustworthiness of the data
We embrace common and proven software engineering practices to achieve quality while maximizing productivity
We promote collective and shared ownership of our technical assets
We value the flexibility needed when working on a global team, while balancing and minimizing non-work-hour involvement in all locations
How You Can Help
Design and develop infrastructure and tools for the systems powering all of Udacity's data, analytics, and reporting
Work with analysts to generalize the data points behind their work to form multi-dimensional data stores
Build out the lakehouse for analytics, machine learning, and AI use cases, with a strong focus on accuracy and reliability, using technologies such as Spark, Airflow, dbt, and Iceberg on AWS
Work with stakeholders from other departments and successfully translate their requirements into engineering solutions
Be a champion and thought leader of effective agile software development practice, producing high-quality, readable code in such a setting
What We Need From You
BS or MS in Computer Science, MIS, or a related degree
6+ years of experience working in the software industry, with at least 3 in data engineering
Solid understanding of, and experience practicing, Agile software development methodologies such as test-driven development
Good understanding of the principles of building robust data processing pipelines and a track record of putting them into practice
Proven ability to trace, identify, and resolve issues in data, infrastructure, and code
Hands-on experience working with Apache Spark, cloud data storage, and relational databases
Hands-on experience working on cloud platforms, preferably AWS
Proficiency in Python, Scala, and SQL
Experience working with generative AI is desired
Experience working with US- or Europe-based remote teams is a strong plus
Benefits
Experience a rewarding work environment with Udacity's perks and benefits! At Udacity, we offer you the flexibility of working from home. We also have in-person collaboration spaces in Mountain View, Cairo, Dubai, and Noida, and continue to build opportunities for team members to connect in person.
Flexible working hours
Paid time off
Comprehensive medical insurance coverage for you and your dependents
Employee wellness resources and initiatives (access to wellness platforms like Headspace)
Quarterly wellness day off
Personalized career development
Unlimited access to Udacity Nanodegrees
Compensation at Udacity, an Accenture company, varies depending on a wide array of factors, which may include but are not limited to location, role, skill set, and level of experience. As required by local law, Udacity, an Accenture company, will provide a reasonable range of compensation.
We believe that no one should be discriminated against because of their differences. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis as protected by federal, state, or local law. Our rich diversity makes us more innovative, more competitive, and more creative, which helps us better serve our clients and our communities.
Accenture Equal Opportunity Statement
Udacity, an Accenture company, is an EEO and Affirmative Action Employer of Veterans/Individuals with Disabilities, and is committed to providing veteran employment opportunities to our service men and women. Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States. Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process. Further, at Accenture a criminal conviction history is not an absolute bar to employment.
Udacity's Values
Obsess over Outcomes - Take the Lead - Embrace Curiosity - Celebrate the Assist
Udacity's Terms of Use and Privacy Policy
Posted 3 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Gurugram
Work from Office
Specialism: Risk
Management Level: Director
Responsibilities
Manage clients' compliance with data protection laws, including GDPR, CCPA, and other data privacy regulations and guidelines with respect to data protection
Review commercial agreements and contracts, including data processing agreements with data processors
Experience performing privacy impact assessments, data discovery, and data classification, and developing data flow maps
Experience in the implementation and use of privacy-enhancing technologies and the design of data privacy frameworks
Define privacy safeguards based on elements of frameworks derived from various data protection regulations
Must have implemented and/or supported data protection technologies
Experience with the development and implementation of data protection solutions such as eDiscovery tools, data classification solutions, and data leakage prevention solutions, to ensure privacy policies are correctly implemented
Work to align advanced data protection technologies and privacy-by-design principles to ensure data use meets privacy regulatory requirements
Knowledge of data anonymization, pseudonymization, and encryption technical controls to develop systems that improve privacy protections
Must have experience in database protection and hands-on knowledge of one or more of the associated technologies
Working knowledge of designing privacy enhancements with the goal of developing technical solutions and systems to mitigate privacy risks
Manage escalated data privacy queries from all parts of the business, bringing them to resolution by developing effective solutions
Develop a communications strategy, in line with company strategy, to engage with key stakeholders
Great communication skills and the ability to break down and explain complex data security problems
Highly motivated to contribute to and grow the data privacy competency within the organization
Total experience of 7 years or more; prior Big 4 experience would be an added advantage
Experience in data privacy across varied industry segments preferred
Excellent communication skills, both written and oral
Certifications: CIPP/CIPM/DCPP will be an added advantage
Mandatory skill sets: Data Protection
Preferred skill sets: GDPR
Years of experience required: 14+ years
Education Qualification: BE/BTech; postgraduates in any stream would be preferred (not mandatory)
Degrees/Field of Study required: Bachelor of Technology
Required Skills: Data Protection
Additional skills: Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Analytical Thinking, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Coaching and Feedback, Communication, Compliance Auditing, Corporate Governance, Creativity, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Financial Accounting {+ 36 more}
Travel Requirements
Available for Work Visa Sponsorship
Posted 3 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language), Apache Spark, Google BigQuery
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, contributing to the overall success of the projects you are involved in.
Roles & Responsibilities:
- Expected to perform independently and become an SME
- Required active participation/contribution in team discussions
- Contribute to providing solutions to work-related problems
- Assist in the documentation of application specifications and user guides
- Collaborate with cross-functional teams to gather requirements and provide technical insights
Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark
- Good-to-Have Skills: Experience with Python (Programming Language), Apache Spark, Google BigQuery
- Strong understanding of data processing and transformation techniques
- Experience in developing scalable applications using distributed computing frameworks
- Familiarity with cloud platforms and services related to application deployment
Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark
- This position is based at our Chennai office
- A 15 years full time education is required
Posted 3 weeks ago
7.0 - 12.0 years
10 - 14 Lacs
Coimbatore
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Apache Spark
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, collaborating with teams, and making key decisions to ensure project success.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application design and development process
- Coordinate with stakeholders to gather requirements
- Ensure timely delivery of high-quality applications
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Apache Spark
- Strong understanding of big data processing
- Experience in building scalable applications
- Knowledge of cloud computing platforms
- Hands-on experience in data processing and analysis
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Spark
- This position is based at our Bengaluru office
- A 15 years full-time education is required
Posted 3 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
bengaluru
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancement, and/or development work.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Software Development Engineer, you will analyze, design, code, and test various components of application code for multiple clients. Your day will involve collaborating with team members on enhancements and maintenance tasks while developing new features to meet client needs. You will deliver high-quality code and participate in discussions that drive project success, ensuring that all components function seamlessly within the overall application architecture.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data processing and analytics workflows.
- Experience with cloud-based data solutions and architectures.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization tools and techniques.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
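A minimal sketch of a typical Databricks workflow, assuming the Delta Lake table format the platform defaults to; the table names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Databricks notebook `spark` is provided by the runtime; this builder
# is only needed when running the same logic as a standalone job.
spark = SparkSession.builder.appName("customer-events-curation").getOrCreate()

# Hypothetical source table registered in the metastore.
raw = spark.read.table("raw.customer_events")

cleaned = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Appending to a managed Delta table gives ACID guarantees and time travel,
# which helps when debugging a bad load.
(cleaned.write
    .format("delta")
    .mode("append")
    .saveAsTable("curated.customer_events"))
```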
Posted 3 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language), Apache Spark, Google BigQuery
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications align with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to project success by leveraging your expertise in application development.

Roles & Responsibilities:
- Act as an SME for the team.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Good-to-have: Experience with Apache Spark, Python (Programming Language), Google BigQuery.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms and services.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
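Since this posting lists Google BigQuery as a good-to-have alongside PySpark, here is a hedged sketch of reading a BigQuery table from Spark via the spark-bigquery connector; the project, dataset, and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the spark-bigquery connector is on the classpath, e.g. submitted
# with --packages com.google.cloud.spark:spark-bigquery-with-dependencies_2.12.
spark = SparkSession.builder.appName("bq-read").getOrCreate()

# Hypothetical fully qualified table reference.
trips = (spark.read.format("bigquery")
         .option("table", "example-project.analytics.trips")
         .load())

# A simple aggregation mirroring the reporting workloads such roles describe.
summary = trips.groupBy("city").agg(F.avg("duration_min").alias("avg_duration"))
summary.show()
```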
Posted 3 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
chennai
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, Core Banking
Good-to-have skills: AWS BigData
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Assist in optimizing data processing workflows to improve efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have: Proficiency in Snowflake Data Warehouse and Core Banking.
- Good-to-have: Experience with AWS BigData.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 3 years of experience with Snowflake Data Warehouse.
- This position is based at our Chennai office.
- A 15 years full time education is required.
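A minimal sketch of one Snowflake ETL load step of the kind this role describes, using the snowflake-connector-python package; the credentials, stage, and table names are hypothetical placeholders:

```python
import snowflake.connector

# Hypothetical connection details; in practice these would come from a
# secrets manager, never literals in code.
conn = snowflake.connector.connect(
    user="ETL_USER",
    password="***",
    account="example-account",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load staged files into a table -- the core of a Snowflake ETL step.
    cur.execute("""
        COPY INTO staging.transactions
        FROM @raw_stage/transactions/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'CONTINUE'
    """)
    # A lightweight data-quality check after the load.
    cur.execute("SELECT COUNT(*) FROM staging.transactions WHERE txn_id IS NULL")
    null_ids = cur.fetchone()[0]
    if null_ids:
        raise ValueError(f"{null_ids} rows loaded with NULL txn_id")
finally:
    conn.close()
```

Running the quality check inside the same job keeps bad loads from silently propagating downstream, which is the "ensure data quality" responsibility in concrete form.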
Posted 3 weeks ago