Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0525-0430
Employment Type: Full Time
Position Description:
Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.
This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.
Job Title: Senior Software Engineer
Position: Senior Software Engineer - Node, AWS and Terraform
Experience: 5-8 Years
Main location: Hyderabad/Chennai/Bangalore
Responsibilities:
Design, develop, and maintain robust and scalable server-side applications using Node.js and JavaScript/TypeScript.
Develop and consume RESTful APIs and integrate with third-party services.
In-depth knowledge of the AWS cloud, including familiarity with services such as S3, Lambda, DynamoDB, Glue, Apache Airflow, SQS, SNS, ECS, Step Functions, EMR (Elastic MapReduce), EKS (Elastic Kubernetes Service), and Key Management Service.
Hands-on experience with Terraform.
Specialization in designing and developing fully automated end-to-end data processing pipelines for large-scale data ingestion, curation, and transformation.
Experience in deploying Spark-based ingestion frameworks, testing automation tools, and CI/CD pipelines.
Knowledge of unit testing frameworks and best practices.
Working experience with SQL and NoSQL (preferred) databases, including joins, aggregations, window functions, date functions, partitions, indexing, and performance improvement ideas.
Experience with database systems such as Oracle, MySQL, PostgreSQL, MongoDB, or other NoSQL databases.
Familiarity with ORM/ODM libraries (e.g., Sequelize, Mongoose).
Proficiency in using Git for version control.
Understanding of testing frameworks (e.g., Jest, Mocha, Chai) and writing unit and integration tests.
Collaborate with front-end developers to integrate user-facing elements with server-side logic.
Design and implement efficient database schemas and ensure data integrity.
Write clean, well-documented, and testable code.
Participate in code reviews to ensure code quality and adherence to coding standards.
Troubleshoot and debug issues in development and production environments.
Knowledge of security best practices for web applications (authentication, authorization, data validation).
Strong communication and collaboration skills, including the ability to interact with technical and non-technical stakeholders.
CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation.
Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process, and we will work with you to address your needs.
Skills: Node.js, RESTful APIs, Terraform
What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
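The SQL skills this posting asks for (joins, aggregations, window functions, partitions) can be illustrated in a few lines. The following is a minimal sketch using Python's built-in sqlite3 module; the table names and data are invented for illustration and are not part of the posting. (It assumes an SQLite build with window-function support, which ships with all recent Python releases.)

```python
import sqlite3

# Hypothetical customers/orders data, purely illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 250.0), (3, 2, 75.0);
""")

# Join + window function: each order alongside a running total per
# customer (PARTITION BY), ordered by order id.
rows = conn.execute("""
    SELECT c.name,
           o.amount,
           SUM(o.amount) OVER (PARTITION BY o.customer_id
                               ORDER BY o.id) AS running_total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
""").fetchall()

for name, amount, running in rows:
    print(name, amount, running)
```

The same pattern (a `SUM(...) OVER (PARTITION BY ...)` running total) carries over directly to Oracle, PostgreSQL, and the other databases the posting names.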
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
We are looking for a highly motivated and self-driven Python Developer with a solid understanding of Python core concepts, the Pytest framework, and hands-on experience with web scraping and WebDriver concepts. The ideal candidate should be comfortable working independently on enterprise-level projects and have strong proficiency in Google BigQuery, especially with complex joins. Experience with Looker Studio and a basic understanding of CI/CD pipelines is a plus.
Key Responsibilities:
Develop and maintain Python-based tools and applications.
Write automated tests using Pytest, ensuring high code coverage and reliability.
Implement and maintain robust web scraping solutions using WebDriver and other scraping tools.
Write efficient BigQuery queries involving complex joins and aggregations for data analysis and reporting.
Understand and contribute to large-scale enterprise projects with minimal supervision.
Collaborate with cross-functional teams to gather and understand business requirements.
Ensure best coding practices, code reviews, and a maintainable codebase.
Document technical designs, processes, and project insights.
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development.
You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences.
Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
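The Pytest work described in this posting centers on plain `assert` statements, which the pytest runner discovers and reports on. Below is a minimal sketch; the helper function and the price strings are invented for illustration, and the tests run both under `pytest` and as ordinary function calls.

```python
def parse_price(text: str) -> float:
    """Turn a scraped price string like '₹1,299.00' into a float.

    Hypothetical helper of the kind a scraping pipeline might need;
    keeps only digits and the decimal point, then converts.
    """
    cleaned = "".join(ch for ch in text if ch.isdigit() or ch == ".")
    if not cleaned:
        raise ValueError(f"no digits in {text!r}")
    return float(cleaned)


def test_parse_price_strips_currency_and_commas():
    # Currency symbol and thousands separator are dropped.
    assert parse_price("₹1,299.00") == 1299.0


def test_parse_price_rejects_non_numeric():
    # A value with no digits should raise, not silently return 0.
    try:
        parse_price("N/A")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Saved as `test_prices.py`, `pytest` would collect both `test_` functions automatically; parametrization and fixtures build on this same assert-based style.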
Posted 5 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities:
Test Strategy & Planning: Develop and implement robust test strategies, detailed test plans, and comprehensive test cases for ETL processes, data migrations, data warehouse solutions, and data lake implementations.
Ab Initio ETL Testing: Execute functional, integration, regression, and performance tests for ETL jobs developed using Ab Initio Graphical Development Environment (GDE), Co>Operating System, and plans deployed via Control Center. Validate data transformations, aggregations, and data quality rules implemented within Ab Initio graphs.
Spark Data Pipeline Testing: Perform hands-on testing of data pipelines and transformations built using Apache Spark (PySpark/Scala Spark) for large-scale data processing in batch and potentially streaming modes. Verify data correctness, consistency, and performance of Spark jobs from source to target.
Advanced Data Validation & Reconciliation: Perform extensive data validation and reconciliation activities between source, staging, and target systems using complex SQL queries. Conduct row counts, sum checks, data type validations, primary key/foreign key integrity checks, and business rule validations.
Data Quality Assurance: Identify, analyze, document, and track data quality issues, anomalies, and discrepancies across the data landscape. Collaborate closely with ETL/Spark developers, data architects, and business analysts to understand data quality requirements, identify root causes, and ensure timely resolution of defects.
Documentation & Reporting: Create and maintain detailed test documentation, including test cases, test results, defect reports, and data quality metrics dashboards. Provide clear and concise communication on test progress, defect status, and overall data quality posture to stakeholders.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
3+ years of dedicated experience in ETL/Data Warehouse testing.
Strong hands-on experience testing ETL processes developed using Ab Initio (GDE, Co>Operating System).
Hands-on experience in testing data pipelines built with Apache Spark (PySpark or Scala Spark).
Advanced SQL skills for data querying, validation, complex joins, and comparison across heterogeneous databases (e.g., Oracle, DB2, SQL Server, Hive).
Solid understanding of ETL methodologies, data warehousing concepts (star schema, snowflake schema), and data modeling principles.
Experience with test management and defect tracking tools (e.g., JIRA, Azure DevOps, HP ALM).
Excellent analytical, problem-solving, and communication skills, with a keen eye for detail.
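The row-count, sum-check, and key-integrity validations this role describes reduce to a handful of comparisons between source and target extracts. Here is a minimal sketch in plain Python; the record shapes and values are invented, and in practice the same checks would usually be pushed down into SQL against the actual databases.

```python
# Hypothetical source and target extracts after an ETL load.
# In a real reconciliation these would come from SQL queries.
source = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.5}, {"id": 3, "amt": 5.0}]
target = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.5}]

checks = {
    # Row-count check: did every source row land in the target?
    "row_count_match": len(source) == len(target),
    # Sum check: does a numeric measure total the same on both sides?
    "sum_match": sum(r["amt"] for r in source) == sum(r["amt"] for r in target),
    # Key check: which primary keys are missing from the target?
    "missing_ids": sorted({r["id"] for r in source} - {r["id"] for r in target}),
}

print(checks)
```

Here the target dropped the row with `id` 3, so both the row-count and sum checks fail and `missing_ids` pinpoints the gap; a defect report would attach exactly this kind of evidence.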
Posted 6 days ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Summary
Position Summary
Strategy & Analytics - AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements
Python Developer - Sr. Consultant
The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. As a Data Engineer, you will be an integral member of our Data & Analytics team, responsible for the design and development of pipelines using cutting-edge technologies.
Work you’ll do:
Implement security and data protection
Implement ETL pipelines for data from a wide variety of data sources using Python and SQL
Deliver data and insights in real time
Participate in architectural, design, and product sessions
Apply unit testing and debugging skills
Collaborate with other developers, testers, and system engineers to ensure the quality of deliverables and any product enhancements
Qualifications
Required:
6-9 years of technology consulting experience
Education: Bachelor’s/Master’s degree in Computer Science / MCA / M.Sc / MBA
A minimum of 2 years of experience in unit testing and debugging
Excellent knowledge of the Python programming language, along with knowledge of at least one Python web framework (Django, Flask, FastAPI, Pyramid)
Extensive experience with Pandas/NumPy dataframes: slicing, data wrangling, aggregations; lambda functions and decorators; vector operations on Pandas dataframes/series; application of the applymap, apply, and map functions
Understanding of how to choose a framework based on specific needs and requirements
Understanding of the threading limitations of Python and of multi-process architecture
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Primary Skills:
Python and data analysis libraries (Pandas, NumPy, SciPy)
Django
DS/Algo
SQL (read and write)
CRUD
Awareness of microservices
Preferred:
Good understanding of the fundamental design principles behind a scalable application
Good understanding of accessibility and security compliance
Familiarity with event-driven programming in Python
Proficient understanding of code versioning tools (Git, Mercurial or SVN)
Knowledge of PowerShell and SQL Server
Familiarity with big data technologies like Spark or Flink, and comfort working with web-scale datasets
An eye for detail, good data intuition, and a passion for data quality
Good knowledge of user authentication and authorization between multiple systems, servers, and environments
An appreciation of the importance of great documentation and data debugging skills
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300058
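Two of the Python topics this posting names, lambda functions and decorators, fit in a short sketch. The memoization use case below is an invented illustration (not from the posting), using only the standard library.

```python
import functools


def memoize(fn):
    """Decorator: cache results of fn by its positional arguments."""
    cache = {}

    @functools.wraps(fn)  # preserve fn's name and docstring
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper


@memoize  # equivalent to: square = memoize(square)
def square(n):
    return n * n


# A lambda is an anonymous single-expression function; handy as a
# short key= or transform argument, e.g. in Pandas' apply/map.
double = lambda x: 2 * x

print(square(4), double(5))
```

The second call to `square(4)` would return the cached value without re-running the body, which is the whole point of the decorator pattern here.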
Posted 6 days ago
3.0 - 6.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary
Position Summary
Strategy & Analytics - AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements
Python Developer - Consultant
The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. As a Data Engineer, you will be an integral member of our Data & Analytics team, responsible for the design and development of pipelines using cutting-edge technologies.
Work you’ll do:
Implement security and data protection
Implement ETL pipelines for data from a wide variety of data sources using Python and SQL
Deliver data and insights in real time
Participate in architectural, design, and product sessions
Apply unit testing and debugging skills
Collaborate with other developers, testers, and system engineers to ensure the quality of deliverables and any product enhancements
Qualifications
Required:
3-6 years of technology consulting experience
Education: Bachelor’s/Master’s degree in Computer Science / MCA / M.Sc / MBA
A minimum of 2 years of experience in unit testing and debugging
Excellent knowledge of the Python programming language, along with knowledge of at least one Python web framework (Django, Flask, FastAPI, Pyramid)
Extensive experience with Pandas/NumPy dataframes: slicing, data wrangling, aggregations; lambda functions and decorators; vector operations on Pandas dataframes/series; application of the applymap, apply, and map functions
Understanding of how to choose a framework based on specific needs and requirements
Understanding of the threading limitations of Python and of multi-process architecture
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Primary Skills:
Python and data analysis libraries (Pandas, NumPy, SciPy)
Django
DS/Algo
SQL (read and write)
CRUD
Awareness of microservices
Preferred:
Good understanding of the fundamental design principles behind a scalable application
Good understanding of accessibility and security compliance
Familiarity with event-driven programming in Python
Proficient understanding of code versioning tools (Git, Mercurial or SVN)
Knowledge of PowerShell and SQL Server
Familiarity with big data technologies like Spark or Flink, and comfort working with web-scale datasets
An eye for detail, good data intuition, and a passion for data quality
Good knowledge of user authentication and authorization between multiple systems, servers, and environments
An appreciation of the importance of great documentation and data debugging skills
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300054
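One preferred skill this posting lists is authentication and authorization between multiple systems. A common stdlib-only building block is an HMAC-signed token that one service issues and another verifies with a pre-shared key. The sketch below is a hedged illustration: the payload fields, key, and token layout are invented, and a production system would typically use an established format such as JWT instead.

```python
import base64
import hashlib
import hmac
import json

# Assumption for this sketch: both services hold the same secret key.
SECRET = b"shared-secret-between-services"


def sign(payload):
    """Serialize a payload and append an HMAC-SHA256 tag."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    mac = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + mac


def verify(token):
    """Return the payload if the tag checks out, else None."""
    body, mac = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):  # constant-time compare
        return None  # signature mismatch: reject the request
    return json.loads(base64.urlsafe_b64decode(body))


token = sign({"svc": "reporting", "scope": "read"})
print(verify(token))
```

Any change to the body or the tag makes `verify` return `None`, which is what lets the receiving service trust claims like `scope` without a shared database session.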
Posted 6 days ago
6.0 - 9.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary
Position Summary
Strategy & Analytics - AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements
Python Developer - Sr. Consultant
The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. As a Data Engineer, you will be an integral member of our Data & Analytics team, responsible for the design and development of pipelines using cutting-edge technologies.
Work you’ll do:
Implement security and data protection
Implement ETL pipelines for data from a wide variety of data sources using Python and SQL
Deliver data and insights in real time
Participate in architectural, design, and product sessions
Apply unit testing and debugging skills
Collaborate with other developers, testers, and system engineers to ensure the quality of deliverables and any product enhancements
Qualifications
Required:
6-9 years of technology consulting experience
Education: Bachelor’s/Master’s degree in Computer Science / MCA / M.Sc / MBA
A minimum of 2 years of experience in unit testing and debugging
Excellent knowledge of the Python programming language, along with knowledge of at least one Python web framework (Django, Flask, FastAPI, Pyramid)
Extensive experience with Pandas/NumPy dataframes: slicing, data wrangling, aggregations; lambda functions and decorators; vector operations on Pandas dataframes/series; application of the applymap, apply, and map functions
Understanding of how to choose a framework based on specific needs and requirements
Understanding of the threading limitations of Python and of multi-process architecture
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Primary Skills:
Python and data analysis libraries (Pandas, NumPy, SciPy)
Django
DS/Algo
SQL (read and write)
CRUD
Awareness of microservices
Preferred:
Good understanding of the fundamental design principles behind a scalable application
Good understanding of accessibility and security compliance
Familiarity with event-driven programming in Python
Proficient understanding of code versioning tools (Git, Mercurial or SVN)
Knowledge of PowerShell and SQL Server
Familiarity with big data technologies like Spark or Flink, and comfort working with web-scale datasets
An eye for detail, good data intuition, and a passion for data quality
Good knowledge of user authentication and authorization between multiple systems, servers, and environments
An appreciation of the importance of great documentation and data debugging skills
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300058
Posted 6 days ago
6.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary
Position Summary
Strategy & Analytics - AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements
Python Developer - Sr. Consultant
The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. As a Data Engineer, you will be an integral member of our Data & Analytics team, responsible for the design and development of pipelines using cutting-edge technologies.
Work you’ll do:
Implement security and data protection
Implement ETL pipelines for data from a wide variety of data sources using Python and SQL
Deliver data and insights in real time
Participate in architectural, design, and product sessions
Apply unit testing and debugging skills
Collaborate with other developers, testers, and system engineers to ensure the quality of deliverables and any product enhancements
Qualifications
Required:
6-9 years of technology consulting experience
Education: Bachelor's/Master's degree in Computer Science / MCA / M.Sc / MBA
A minimum of 2 years of experience in unit testing and debugging
Excellent knowledge of the Python programming language, along with knowledge of at least one Python web framework (Django, Flask, FastAPI, Pyramid)
Extensive experience with Pandas/NumPy dataframes: slicing, data wrangling, aggregations
Lambda functions, decorators
Vector operations on Pandas dataframes/series; application of applymap, apply, and map functions
Understanding of choosing a framework based on specific needs and requirements
Understanding of the threading limitations of Python and multi-process architecture
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Primary Skills
Python and data analysis libraries (Pandas, NumPy, SciPy)
Django
DS/Algo
SQL (read & write)
CRUD
Awareness of microservices
Preferred:
Good understanding of the fundamental design principles behind a scalable application
Good understanding of accessibility and security compliance
Familiarity with event-driven programming in Python
Proficient understanding of code versioning tools (Git, Mercurial, or SVN)
Knowledge of PowerShell and SQL Server
You are familiar with big data technologies like Spark or Flink and comfortable working with web-scale datasets
You have an eye for detail, good data intuition, and a passion for data quality
Good knowledge of user authentication and authorization between multiple systems, servers, and environments
You appreciate the importance of great documentation and data debugging skills
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300058
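The posting above asks for vector operations on Pandas dataframes and the applymap/apply/map family. A minimal sketch of those idioms, using invented toy data (the column names and values are illustrative, not from the posting):

```python
import pandas as pd

# Toy sales data standing in for a real extract.
df = pd.DataFrame({
    "region": ["N", "S", "N", "S"],
    "units": [10, 3, 7, 5],
    "price": [2.0, 4.0, 2.0, 4.0],
})

# Series.map applies an element-wise mapping (function or dict) to one column.
df["region_name"] = df["region"].map({"N": "North", "S": "South"})

# A lambda with DataFrame.apply(axis=1) computes a derived column row by row.
df["revenue"] = df.apply(lambda row: row["units"] * row["price"], axis=1)

# Vectorized aggregation: group-level totals without explicit loops.
totals = df.groupby("region_name")["revenue"].sum()
print(totals.to_dict())  # {'North': 34.0, 'South': 32.0}
```

Note that the row-wise lambda is shown for illustration; the same column is computed faster with the vectorized form `df["units"] * df["price"]`.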
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Mid-Level Data Analyst
Location: Hyderabad, Telangana
Job Type: On-Site
Must be an immediate joiner
Job Overview
We are seeking a Mid-Level Data Analyst with strong SQL skills and proven analytical abilities to join our growing team. The ideal candidate will have a deep understanding of data manipulation and be skilled in providing actionable insights through data analysis. You will play a key role in transforming complex data into meaningful reports, dashboards, and visualizations that drive strategic decision-making. You will collaborate closely with cross-functional teams to identify business needs and contribute to optimizing operational efficiency.
Key Responsibilities
Data Analysis & Reporting: Analyze large datasets using SQL and other tools to identify trends, patterns, and insights that support business decision-making.
SQL Querying: Write complex SQL queries to extract, manipulate, and summarize data from multiple databases. Optimize queries for performance and scalability.
Cross-Functional Collaboration: Work closely with business units to understand their data needs and provide relevant insights for data-driven decisions.
Education: Bachelor's degree in Data Science, Mathematics, Statistics, Computer Science, Business, or a related field.
Experience: 2-4 years of experience in data analysis, data manipulation, or related fields.
Technical Skills
SQL: Strong proficiency in SQL for querying large datasets, including writing complex joins, subqueries, and aggregations.
Analytical Skills: Strong problem-solving abilities, with a focus on identifying insights from data that drive business outcomes.
Communication: Excellent written and verbal communication skills, with the ability to translate complex data findings into actionable insights for non-technical stakeholders.
(ref:hirist.tech)
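The complex joins, subqueries, and aggregations this role calls for can be sketched against an in-memory SQLite database; the table and column names here are invented for illustration:

```python
import sqlite3

# In-memory SQLite database standing in for a production warehouse.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers(id INTEGER, name TEXT);
    CREATE TABLE orders(id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# A join plus aggregation, with a subquery filtering on the overall average.
rows = con.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    HAVING SUM(o.amount) > (SELECT AVG(amount) FROM orders)
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Asha', 200.0)]
```

The `HAVING` clause with a scalar subquery is the kind of pattern interviewers for analyst roles often probe: it filters groups after aggregation, whereas `WHERE` filters rows before it.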
Posted 6 days ago
2.0 years
0 Lacs
India
On-site
About Ascendeum: We provide AdTech strategy consulting to leading internet websites and apps hosting over 200 million monthly audiences worldwide. Since 2015, our consultants and engineers have consistently delivered intelligent solutions that enable enterprise-level websites and apps to maximize their digital advertising returns. About the Role: We are seeking a highly analytical and detail-oriented Offshore Marketing & Data Analyst to support our growing analytics team. This role will focus on performance reporting, campaign analysis, and dashboard development across marketing channels. You will be responsible for transforming complex data into actionable insights and automated reporting for internal and client stakeholders. Key Responsibilities Collect, analyze, and interpret marketing performance data across paid media, website, and CRM platforms. Build and maintain dashboards in tools like Tableau or Looker for internal teams and client reporting. Use SQL to query structured data sources and generate custom views or data extracts. Work with Google Analytics 4 (GA4) and understand user journey behavior, conversion paths, and attribution logic. Interpret and analyze media metrics Collaborate with internal teams to support campaign tracking implementation and QA of data tags across platforms like Google Tag Manager . Assist in performance audits, pacing analysis, and campaign optimization recommendations. Build data pipelines or transformations using Python (basic scripting and automation). Support ad hoc requests for data and analysis. Required Skills and Qualifications 2+ years in a marketing analytics, business intelligence, or data analyst role. Proficiency in GA4 and understanding of media buying platforms (Google Ads, Meta Ads, DSPs, etc.). Hands-on experience with dashboarding tools such as Tableau, Looker, or Power BI. Strong understanding of media performance metrics and digital KPIs. Proficient in SQL for data extraction, joins, and aggregations. 
Familiarity with Python for data wrangling and automation.
Understanding of tagging and tracking methodologies, including UTM parameters, pixels, and tag managers.
Ability to QA marketing tracking setups and identify discrepancies in data.
Strong communication and time management skills, with the ability to work autonomously.
Salary Bracket: up to 25 LPA
Thank you for your interest in joining Ascendeum.
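The UTM-parameter tracking mentioned above can be QA'd with a few lines of standard-library Python; the landing-page URL below is hypothetical, but the `utm_*` parameter names are the standard ones:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical tracked landing-page URL.
url = ("https://example.com/pricing?utm_source=google"
       "&utm_medium=cpc&utm_campaign=spring_sale&gclid=abc123")

def extract_utm(url: str) -> dict:
    """Return only the utm_* parameters from a tracked URL."""
    params = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in params.items() if k.startswith("utm_")}

tags = extract_utm(url)
print(tags)
# {'utm_source': 'google', 'utm_medium': 'cpc', 'utm_campaign': 'spring_sale'}
```

Non-UTM click identifiers (here `gclid`) are deliberately dropped, which is useful when auditing whether campaign links carry a complete source/medium/campaign triple.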
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Get to know Okta Okta is The World’s Identity Company. We free everyone to safely use any technology—anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We’re building a world where Identity belongs to you. Senior Analytics Engineer We are looking for an experienced Analytics Engineer to join Okta’s enterprise data team. This analyst will have strong background in SaaS subscription and product analytics, a passion for providing customer usage insights to internal stakeholders, and experience organizing complex data into consumable data assets. In this role, you will be focusing on subscription analytics and product utilization insights and will partner with Product, Engineering, Customer Success, and Pricing to implement enhancements and build end-to-end customer subscription insights into new products. Requirements Experience in customer analytics, product analytics, and go-to-market analytics Experience in SaaS business, Product domain as well as Salesforce Proficiency in SQL, ETL tools, GitHub, and data integration technologies, including familiarity with data modeling techniques, database design, and query optimization. Experience in “data” languages like R and Python. Knowledge of data processing frameworks like PySpark is also beneficial. Experience working with cloud-based data solutions like AWS or Google Cloud Platform and cloud-based data warehousing tools like Snowflake. Strong analytical and problem-solving skills to understand complex data problems and provide effective solutions. 
Experience in building reports and visualizations to represent data in Tableau or Looker Ability to effectively communicate with stakeholders, and work cross-functionally and communicate with technical and non-technical teams Familiarity with SCRUM operating model and tracking work via a tool such as Jira 6+ years in data engineering, data warehousing, or business intelligence BS in computer science, data science, statistics, mathematics, or a related field Responsibilities Engage with Product and Engineering to implement product definitions into subscription and product analytics, building new insights and updates to existing key data products Analyze a variety of data sources, structures, and metadata and develop mapping, transformation rules, aggregations and ETL specifications Configure scalable and reliable data pipelines to consume, integrate and analyze large volumes of complex data from different sources to support the growing needs of subscription and product analytics Partner with internal stakeholders to understand user needs and implement user feedback, and develop reporting and dashboards focused on subscription analytics Work closely with other Analytics team members to optimize data self service, reusability, performance, and ensure validity of source of truth Enhance reusable knowledge of the models and metrics through documentation and use of the data catalog Ensure data security and compliance by implementing appropriate data access controls, encryption, and auditing mechanisms. Take ownership of successful completion for project activities Nice to Have Experience in data science, AI/ML concepts and techniques What you can look forward to as a Full-Time Okta employee! 
Amazing Benefits Making Social Impact Developing Talent and Fostering Connection + Community at Okta Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding. Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/. Show more Show less
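The pipeline responsibilities in the Okta role above (consume, transform, and aggregate usage data for subscription analytics) can be sketched minimally with the standard library; the field names (`account`, `feature`, `count`) are invented for illustration, not from the posting:

```python
import csv
import io
from collections import defaultdict

# Raw usage events as they might arrive from a source extract.
raw = io.StringIO(
    "account,feature,count\n"
    "acme,sso,5\n"
    "acme,mfa,2\n"
    "globex,sso,7\n"
)

# Transform: cast types and aggregate feature usage per account,
# the shape a downstream subscription-analytics model might consume.
usage = defaultdict(int)
for row in csv.DictReader(raw):
    usage[row["account"]] += int(row["count"])

print(dict(usage))  # {'acme': 7, 'globex': 7}
```

In a real pipeline the same extract-transform-aggregate shape would be expressed in SQL or an ETL tool over Snowflake rather than in-process Python; the point is only the staged structure.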
Posted 1 week ago
6.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Ascentt is building cutting-edge data analytics & AI/ML solutions for global automotive and manufacturing leaders. We turn enterprise data into real-time decisions using advanced machine learning and GenAI. Our team solves hard engineering problems at scale, with real-world industry impact. We’re hiring passionate builders to shape the future of industrial intelligence. Sr Tableau Developer 6+ years of experience in Tableau development JD Key Responsibilities Build and maintain complex Tableau dashboards with drill-down capabilities, filters, actions, and KPI indicators. Write advanced calculations like Level of Detail (LOD) expressions to address business logic such as aggregations at different dimensions. Design and implement table calculations for running totals, percent change, rankings, etc. Perform data blending and joins across multiple sources, ensuring data accuracy and integrity. Optimize Tableau workbook performance by managing extracts, minimizing dashboard load time, and tuning calculations. Use parameters, dynamic filters, and action filters for interactive user experiences. Design dashboard wireframes and prototypes using Tableau or other tools like Figma. Manage publishing, scheduling, and permissions in Tableau Server/Cloud. Collaborate with data engineering to design performant, scalable data sources. Document data logic, dashboard specs, and technical workflows for governance. Provide mentorship and technical guidance to junior Tableau developers. Experience in any BI Reporting Tool like PowerBI, Looker, Quicksight, Alteryx is a Plus Qualifications Bachelor’s or Master’s degree in Computer Science, Information Systems, Analytics, or a related field 6+ years of experience in Tableau development Tableau Desktop Certified Professional (preferred) Experience with enterprise BI projects and stakeholder engagement SQL proficiency: Ability to write complex joins, CTEs, subqueries, and window functions. 
Experience working with large datasets in tools like Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse, or SQL Server
Data preparation tools experience (preferred but not required): Tableau Prep, Alteryx, dbt, or equivalent
Knowledge of Tableau Server/Cloud administration (publishing, permissions, data source refreshes)
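The running-total table calculations this role describes in Tableau have a direct SQL analogue in window functions, one of the skills the posting also lists. A minimal sketch over invented monthly sales data (SQLite ships window-function support in modern Python builds):

```python
import sqlite3

# Monthly sales figures; the numbers are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales(month TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("2024-01", 100.0), ("2024-02", 150.0), ("2024-03", 50.0)])

# SUM() OVER (ORDER BY ...) yields a cumulative running total per row,
# the same result Tableau's RUNNING_SUM table calculation produces.
rows = con.execute("""
    SELECT month,
           SUM(amount) OVER (ORDER BY month) AS running_total
    FROM sales
    ORDER BY month
""").fetchall()
print(rows)  # [('2024-01', 100.0), ('2024-02', 250.0), ('2024-03', 300.0)]
```

Pushing such calculations into the data source rather than the workbook is also one of the standard levers for the dashboard performance tuning the posting mentions.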
Posted 1 week ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Role Overview We are seeking a skilled and motivated MERN Stack Developer to join our team. In this role, you will be responsible for developing, maintaining, and scaling dynamic web applications using the MERN stack—MongoDB, Express.js, React.js, and Node.js. You will collaborate closely with our design, marketing, and product teams to build seamless and interactive user experiences, from concept to deployment. Qualifications Experience: Minimum 3+ years of professional experience in MERN Stack development. Proven experience in developing, testing, and deploying full-stack applications with a strong portfolio of completed projects. Skills: Expertise in React.js, Redux or Context API, and component-driven architecture. Strong back-end development skills using Node.js and Express.js. Advanced knowledge of MongoDB, including indexes, aggregations, and schema design. Experience with JWT, OAuth2, or custom authentication and authorisation systems. Solid understanding of RESTful APIs, API versioning, and middleware integration. Proficient in handling file uploads, PDF generation, CSV exports, and audit logging. Familiarity with cloud platforms such as AWS, GCP, or Azure. Experience with CI/CD pipelines for automated testing and deployment. Proficient with Git and collaborative workflows using GitHub or GitLab. Ability to write clean, maintainable, and scalable code with strong attention to detail. Strong problem-solving and debugging skills. Key Responsibilities Develop and Maintain Applications: Build and maintain scalable web applications using the MERN stack. Collaborate Across Teams: Work closely with design, marketing, and product teams to transform ideas into robust technical solutions. Feature Implementation: Design and implement new features with a focus on usability, performance, and reliability. Performance Optimisation: Ensure applications are optimised for speed, responsiveness, and scalability. 
Maintain Code Quality: Write reusable, testable code and participate in code reviews to uphold high coding standards.
Stay Updated: Keep up to date with emerging web technologies and best practices in MERN stack development.
How To Apply
If you are a motivated and results-oriented professional with a passion for business development, we would love to hear from you. Please send your resume to codedote@gmail.com.
About The Company
CodeDote is a software development company with an unswerving vision. We are young IT professionals based in Vadodara, India, with innovative and alluring ideas catering to the needs of small and medium clients across the globe. We will help you fuel up your business strategies.
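The JWT-based authentication the MERN posting above lists can be illustrated with a bare HS256-style signing sketch in standard-library Python. This is an assumption-laden teaching sketch, not production code: the secret and claims are placeholders, and a real Node.js service would use a maintained library such as `jsonwebtoken` with expiry and algorithm checks.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, secret: bytes) -> str:
    """Build header.payload.signature with an HMAC-SHA256 signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}.{payload}.{b64url(sig.digest())}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256)
    return hmac.compare_digest(sig, b64url(expected.digest()))

token = sign_token({"sub": "user-42", "role": "editor"}, b"my-secret")
print(verify_token(token, b"my-secret"))  # True
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` on signatures can leak timing information to an attacker probing forged tokens.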
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Sr. Tableau Developer
Location: Pune/Indore
Full Time Opportunity
Experience: 6+ years of experience in Tableau development
JD Key Responsibilities:
Build and maintain complex Tableau dashboards with drill-down capabilities, filters, actions, and KPI indicators.
Write advanced calculations like Level of Detail (LOD) expressions to address business logic such as aggregations at different dimensions.
Design and implement table calculations for running totals, percent change, rankings, etc.
Perform data blending and joins across multiple sources, ensuring data accuracy and integrity.
Optimize Tableau workbook performance by managing extracts, minimizing dashboard load time, and tuning calculations.
Use parameters, dynamic filters, and action filters for interactive user experiences.
Design dashboard wireframes and prototypes using Tableau or other tools like Figma.
Manage publishing, scheduling, and permissions in Tableau Server/Cloud.
Collaborate with data engineering to design performant, scalable data sources.
Document data logic, dashboard specs, and technical workflows for governance.
Provide mentorship and technical guidance to junior Tableau developers.
Experience in any BI reporting tool like Power BI, Looker, Quicksight, or Alteryx is a plus.
Qualifications
Bachelor's or Master's degree in Computer Science, Information Systems, Analytics, or a related field
6+ years of experience in Tableau development
Tableau Desktop Certified Professional (preferred)
Experience with enterprise BI projects and stakeholder engagement
SQL proficiency: Ability to write complex joins, CTEs, subqueries, and window functions.
Experience working with large datasets in tools like Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse, or SQL Server
Data preparation tools experience (preferred but not required): Tableau Prep, Alteryx, dbt, or equivalent
Knowledge of Tableau Server/Cloud administration (publishing, permissions, data source refreshes)
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka
On-site
About Schneider Electric Schneider Electric is the global specialist in energy management and automation. With revenues of ~€27 billion, our 135,000+ employees serve customers in over 100 countries, helping them to manage their energy and process in ways that are safe, reliable, efficient, and sustainable. From the simplest of switches to complex operational systems, our technology, software, and services improve the way our customers manage and automate their operations. Our connected technologies reshape industries, transform cities, and enrich lives. At Schneider Electric, we call this Life Is On. About Schneider Digital All IT needs for Schneider Electric are managed by a group called Schneider Digital; spread across 303 locations in 60 countries with over 2300 staff; along with various engagements with all major Global IT Service Providers. You will be part of the dynamic Schneider Digital – Employee Experience team whose mission is to “Create digital workplace of the future & Enable HR to build workforce of the future”. Within this team, you will be part of the Identity and Access Management team (IAM) to oversee the effective utilization, and continuous improvement of the IAM solutions. Role mission As an Identity and Access Management (IAM) Support Analyst, your role is crucial in ensuring the Security and Integrity of our organization's digital assets. Managing service levels, operations KPI’s, audit remediation actions and continuous improvements. The ideal candidate will be responsible for providing day-to-day support and troubleshooting for SailPoint IdentityIQ implementations, resolving issues, and coordinating with development and infrastructure teams to ensure platform stability and user access compliance. As a SailPoint Operations Analyst, you will support and maintain our Identity and Access Management (IAM) infrastructure, focusing on user access provisioning and issue resolution. 
Main responsibilities Monitor and maintain SailPoint IdentityIQ platform health and operations, addressing incidents, service requests, and failures related to provisioning, deprovisioning, and identity synchronization. Provide L2/L3 troubleshooting support for complex provisioning issues, including account creation errors, connector failures, role assignments, and onboarding/offboarding automation errors. Analyse audit logs, provisioning events, and identity lifecycle workflows to identify root causes of access failures, entitlement misassignments, and policy violations. Collaborate with end users and business teams to resolve access issues, analyse unusual identity behaviours, and address escalations related to approvals, certification, or segregation of duties (SOD) conflicts. Manage and troubleshoot identity-related jobs and scheduled tasks, including certification campaigns, policy violations, account aggregations, and connector synchronizations. Coordinate with development, infrastructure, and security teams to escalate and resolve performance issues, connector upgrades, and product enhancement bugs. Support version upgrades, patches, and environment migrations, including planning, testing, and executing changes in development, QA, and production environments. Implement patches, hotfixes, and SailPoint upgrades in accordance with change management policies while ensuring minimum downtime and post-upgrade testing validation. Support governance activities like access certifications, account reviews, and role-based access reviews (RBAR), ensuring accuracy and timely completion in compliance with audit schedules. Respond to and resolve alerts from identity system monitoring tools, including failures in scheduled jobs, policy violations, provisioning delays, and system availability warnings. Assist in onboarding and integrating new applications with SailPoint IdentityIQ, configuring connectors, defining provisioning rules, and validating entitlement mappings. 
Maintain and update technical documentation, SOPs, and knowledge base articles to ensure accuracy of support references and reduce resolution time for recurring issues.
Collaborate with compliance and audit teams to ensure that identity processes align with internal security controls and external regulations such as SOX and GDPR.
Participate in support of major deployments, ensuring availability of IAM services during critical maintenance windows and upgrades.
Qualifications
Skills and experience
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
4+ years of experience in IAM support with hands-on exposure to SailPoint IdentityIQ.
Understanding of IAM concepts and ITSM workflows.
Strong understanding of identity lifecycle management, RBAC, and access certifications.
Familiarity with REST/SOAP APIs and SailPoint rules, workflows, and tasks.
Scripting and debugging experience (Java/BeanShell preferred).
Strong problem-solving skills and the ability to make decisions in a fast-paced environment.
Strong communication and interpersonal skills, with the ability to collaborate effectively with users, professionals, and IT teams.
Primary Location: IN-Karnataka-Bangalore
Schedule: Full-time
Unposting Date: Ongoing
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Skills: DAX, Data Modeling, Power Query, ETL Tools, Snowflake, Data Lakes, SQL, Data Warehousing
Qualifications And Skills
Extensive experience in data modeling techniques and practices, crucial for crafting scalable and optimized data models.
Proficiency in Power BI development and administration to support robust reporting and analytics.
Strong command of DAX (mandatory skill) for developing intricate calculations and aggregations within Power BI solutions.
Advanced expertise in handling Snowflake (mandatory skill) for adept cloud-based data warehousing capabilities.
Knowledge of data lakes (mandatory skill) for managing large volumes of structured and unstructured data efficiently.
Solid understanding of ETL tools, necessary for the efficient extraction, transformation, and loading of data.
Exceptional SQL skills to design, query, and manage complex databases for data-driven decisions.
Experience in data warehousing concepts and architectures to support structured and systematic data storage.
Roles And Responsibilities
Architect and implement cutting-edge Power BI solutions that transform business requirements into insightful dashboards and reports.
Collaborate with cross-functional teams to gather and analyze data requirements, ensuring alignment with business objectives.
Design and optimize data models using advanced techniques for improved performance and scalability in Power BI.
Leverage DAX to create complex calculations and custom metrics, enhancing the depth and quality of analytical outputs.
Utilize Snowflake to manage and optimize cloud-based data warehousing solutions for seamless data integration.
Implement and administer data lakes to efficiently handle and store large datasets, both structured and unstructured.
Ensure data accuracy, validity, and security through meticulous data quality checks and validation processes.
Stay updated with the latest industry trends and best practices to continuously improve BI solutions and strategies.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within the Consumer & Community Banking Team, you are part of an agile team that works to enhance, design, and deliver the data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job Responsibilities
Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
Supports review of controls to ensure sufficient protection of enterprise data
Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request
Updates logical or physical data models based on new use cases
Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
Adds to team culture of diversity, equity, inclusion, and respect
Contributes to software and data engineering communities of practice and events that explore new and emerging technologies
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
Required Qualifications, Capabilities, And Skills
Formal training or certification on Data Engineering concepts and 3+ years applied experience in AWS and Kubernetes
Proficiency in one or more large-scale data processing distributions such as Java Spark/PySpark, along with knowledge of data pipelines (DPL), data modeling, data warehousing, data migration, and so on
Experience across the data lifecycle, along with expertise in consuming data in any of: batch (file), near real-time (IBM MQ, Apache Kafka), streaming (AWS Kinesis, MSK)
Good at SQL (e.g., joins and aggregations)
Working understanding of NoSQL databases
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
Experience customizing changes in a tool to generate product
Preferred Qualifications, Capabilities, And Skills
Familiarity with modern front-end technologies
Experience designing and building REST API services using Java
Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
AWS Developer/Solutions Architect Certification is highly desired
About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company.
We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction. Show more Show less
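The batch versus streaming consumption patterns listed in the Data Engineer role above share one shape: grouping an unbounded event feed into bounded units of work. A standard-library sketch of micro-batching, where the plain generator stands in for a Kafka or Kinesis consumer (the event fields are invented):

```python
import itertools

def event_stream():
    """Unbounded event source standing in for a message-queue consumer."""
    for i in itertools.count():
        yield {"event_id": i, "amount": i * 10}

def micro_batches(stream, batch_size):
    """Group a (possibly unbounded) stream into fixed-size batches."""
    while True:
        batch = list(itertools.islice(stream, batch_size))
        if not batch:  # exhausted source: only happens for finite streams
            return
        yield batch

stream = event_stream()
first_two = list(itertools.islice(micro_batches(stream, 3), 2))
print([[e["event_id"] for e in b] for b in first_two])  # [[0, 1, 2], [3, 4, 5]]
```

Real streaming frameworks add what this sketch omits: time-based batch boundaries, checkpointed offsets so a restart does not reprocess or drop events, and backpressure.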
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference. As a Lead Data Engineer at JPMorgan Chase within the Consumer & Community Banking (Connected Commerce) organization, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
- Architect and oversee the design of complex data solutions that meet diverse business needs and customer requirements
- Guide the evolution of logical and physical data models to support emerging business use cases and technological advancements
- Build and manage end-to-end cloud-native data pipelines in AWS, leveraging hands-on expertise with AWS components
- Build analytical systems from the ground up, providing architectural direction, translating business issues into specific requirements, and identifying the appropriate data to support solutions
- Work across the service delivery lifecycle on engineering major/minor enhancements and ongoing maintenance of existing applications
- Conduct feasibility studies, capacity planning, and process redesign/re-engineering of complex integration solutions
- Help others build code to extract raw data, coach the team on techniques to validate its quality, and apply deep data knowledge to ensure the correct data is ingested across the pipeline
- Guide the development of data tools used to transform, manage, and access data, and advise the team on writing and validating code to test the storage and availability of data platforms for resilience
- Oversee the implementation of performance-monitoring protocols across data pipelines, coaching the team on building visualizations and aggregations to monitor pipeline health
- Coach others on implementing solutions and self-healing processes that minimize points of failure across multiple product features

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 5+ years of applied experience
- Extensive experience managing the full lifecycle of data, from collection and storage to analysis and reporting
- Proficiency in one or more large-scale data processing frameworks such as Spark (Java), along with knowledge of data pipelines (DPL), data modeling, data warehousing, and data migration
- Hands-on practical experience in system design, application development, testing, and operational stability
- Proficiency in coding in one or more modern programming languages
- Good hands-on experience with AWS services and components, along with a good understanding of Kubernetes
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Strong understanding of domain-driven design, microservices patterns, and architecture
- Overall knowledge of the Software Development Life Cycle, along with experience with IBM MQ and Apache Kafka
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, LLMs)

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Experience designing and building REST API services using Java
- Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
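The "self-healing processes that minimize points of failure" responsibility above is commonly implemented as retry-with-backoff around pipeline steps. This is a generic sketch of that pattern, not the firm's actual tooling; the `flaky_extract` step is invented to simulate a transient fault.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on failure.

    A basic self-healing pattern: transient faults are absorbed by
    retries instead of failing the whole pipeline step.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky extract step: fails twice, then succeeds.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "payload"

result = with_retries(flaky_extract)
print(result, calls["n"])
```

In production the delay, attempt budget, and the set of retryable exceptions would be tuned per dependency; retrying every exception indiscriminately can mask real bugs.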
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team. As a Data Engineer III at JPMorgan Chase within Corporate Technology, you serve as a seasoned member of an agile team that designs and delivers trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
- Support the review of controls to ensure sufficient protection of enterprise data
- Advise on and make custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Update logical or physical data models based on new use cases
- Frequently use SQL; understand NoSQL databases and their niche in the marketplace
- Implement backup, recovery, and disaster recovery (DR) plans
- Monitor database capacity, space, logs, and performance metrics
- Manage database security, access control, and user privilege management
- Conduct database health checks; troubleshoot and resolve issues in real time
- Script using Shell and Python, and use database tooling
- Configure and maintain MongoDB replica sets, sharding, and failover mechanisms

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 3+ years of applied experience
- Experience across the data lifecycle
- Advanced SQL skills (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Expertise as a PostgreSQL and MongoDB DBA
- Strong expertise in SQL, PL/pgSQL, and NoSQL (MongoDB queries, aggregation, and indexing)
- Hands-on experience with PostgreSQL replication, partitioning, and tuning
- Experience managing MongoDB Atlas, sharded clusters, and performance tuning
- Familiarity with database monitoring tools such as Prometheus, Grafana, or CloudWatch
- Strong knowledge of database security best practices and encryption techniques
- Experience automating DB tasks using Bash, Python, or Ansible

Preferred Qualifications, Capabilities, and Skills
- MongoDB Certified DBA
- PostgreSQL Professional Certification
- AWS Certified Database – Specialty / Data Engineer (preferred but not mandatory)
- Experience working with cloud-based databases (AWS RDS, Azure Cosmos DB, GCP Cloud SQL) is a plus

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas, from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
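The DBA posting above couples database monitoring with Python scripting. A minimal sketch of that combination is a threshold check over collected metrics; the metric names and default thresholds here are illustrative, not values from any specific monitoring stack.

```python
def check_db_health(metrics, max_disk_pct=85.0, max_conn_pct=90.0,
                    max_replication_lag_s=30.0):
    """Return a list of alert strings for metrics breaching thresholds.

    `metrics` keys (disk_used_pct, connections, max_connections,
    replication_lag_s) are hypothetical; real values would come from a
    Prometheus/CloudWatch-style collector.
    """
    alerts = []
    if metrics["disk_used_pct"] > max_disk_pct:
        alerts.append(f"disk usage high: {metrics['disk_used_pct']:.1f}%")
    if 100.0 * metrics["connections"] / metrics["max_connections"] > max_conn_pct:
        alerts.append("connection pool near limit")
    if metrics["replication_lag_s"] > max_replication_lag_s:
        alerts.append(f"replication lag: {metrics['replication_lag_s']:.0f}s")
    return alerts

# 91.2% disk breaches the 85% default; the other metrics are within bounds.
sample = {"disk_used_pct": 91.2, "connections": 180,
          "max_connections": 200, "replication_lag_s": 4.0}
print(check_db_health(sample))
```

In practice this logic would run on a schedule and feed a pager or dashboard rather than print.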
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We have an exciting and rewarding opportunity for you to take your data engineering career to the next level. Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team. As a Data Engineer III at JPMorgan Chase within the Consumer & Community Banking team, you serve as a seasoned member of an agile team that designs and delivers trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
- Execute software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions and break down technical problems
- Support the review of controls to ensure sufficient protection of enterprise data
- Advise on and make custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Update logical or physical data models based on new use cases
- Frequently use SQL; understand NoSQL databases and their niche in the marketplace
- Add to a team culture of diversity, equity, inclusion, and respect
- Contribute to software and data engineering communities of practice and events that explore new and emerging technologies
- Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems

Required Qualifications, Capabilities, and Skills
- Formal training or certification in data engineering concepts and 3+ years of applied experience with AWS and Kubernetes
- Proficiency in one or more large-scale data processing frameworks such as Spark (Java/PySpark), along with knowledge of data pipelines (DPL), data modeling, data warehousing, and data migration
- Experience across the data lifecycle, along with expertise in consuming data via batch (file), near real-time (IBM MQ, Apache Kafka), or streaming (AWS Kinesis, MSK) interfaces
- Advanced SQL skills (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Significant experience with statistical data analysis and the ability to determine the appropriate tools and data patterns for a given analysis
- Experience customizing a tool's configuration to generate a product

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Experience designing and building REST API services using Java
- Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
- AWS Developer/Solutions Architect certification is highly desired
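The batch / near-real-time / streaming consumption modes listed above often converge on the same core logic: grouping an incoming sequence of records into bounded micro-batches. This sketch shows that shape in plain Python; the `event-N` records are fabricated, and a real source would be a file reader, an MQ/Kafka consumer, or a Kinesis shard iterator.

```python
from itertools import islice

def micro_batches(records, batch_size):
    """Group a (possibly unbounded) iterator of records into fixed-size
    batches. The same loop serves a file (batch), a queue (near
    real-time), or a stream, because it only assumes an iterator."""
    it = iter(records)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:  # source exhausted
            return
        yield batch

# Seven fabricated events grouped into batches of three.
stream = (f"event-{i}" for i in range(7))
batches = list(micro_batches(stream, 3))
for b in batches:
    print(len(b), b[0])
```

Real consumers add commit/acknowledge handling per batch so a crash replays at most one batch.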
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
You have the opportunity to unleash your full potential at a world-renowned company and take the lead in shaping the future of technology. As a Senior Manager of Data Engineering at JPMorgan Chase within Consumer & Community Banking, you serve in a leadership role, providing technical coaching and advice to multiple technical teams and anticipating the needs and potential dependencies of other data users within the firm. As an expert in your field, your insights influence budget and technical considerations to advance operational efficiencies and functionalities.

Job Responsibilities
- Architect and oversee the design of complex data solutions that meet diverse business needs and customer requirements
- Guide the evolution of logical and physical data models to support emerging business use cases and technological advancements
- Build and manage end-to-end cloud-native data pipelines in AWS, leveraging hands-on expertise with AWS components
- Build analytical systems from the ground up, providing architectural direction, translating business issues into specific requirements, and identifying the appropriate data to support solutions
- Work across the service delivery lifecycle on engineering major/minor enhancements and ongoing maintenance of existing applications
- Help others build code to extract raw data, coach the team on techniques to validate its quality, and apply deep data knowledge to ensure the correct data is ingested across the pipeline
- Guide the development of data tools used to transform, manage, and access data, and advise the team on writing and validating code to test the storage and availability of data platforms for resilience
- Oversee the implementation of performance-monitoring protocols across data pipelines, coaching the team on building visualizations and aggregations to monitor pipeline health
- Coach others on implementing solutions and self-healing processes that minimize points of failure across multiple product features
- Add to a team culture of diversity, equity, inclusion, and respect

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 5+ years of applied experience
- Extensive experience managing the full lifecycle of data, from collection and storage to analysis and reporting
- Proficiency in one or more large-scale data processing frameworks such as Spark (Java), along with knowledge of data pipelines (DPL), data modeling, data warehousing, and data migration
- Hands-on practical experience in system design, application development, testing, and operational stability
- Proficiency in coding in one or more modern programming languages
- Good hands-on experience with AWS services and components, along with a good understanding of Kubernetes
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Strong understanding of domain-driven design, microservices patterns, and architecture
- Overall knowledge of the Software Development Life Cycle, along with experience with IBM MQ and Apache Kafka
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, LLMs)

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Experience designing and building REST API services using Java
- Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
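The "aggregations to monitor pipeline health" responsibility above reduces, at its simplest, to computing per-pipeline success rates from run records and flagging pipelines below a threshold. The record shape and the 95% threshold in this sketch are invented for illustration.

```python
from collections import defaultdict

def pipeline_health(runs, min_success_rate=0.95):
    """Aggregate per-pipeline success rates from run records and return
    (rates, unhealthy). Record shape {"pipeline": ..., "status": ...}
    is hypothetical."""
    totals = defaultdict(lambda: [0, 0])  # pipeline -> [successes, runs]
    for run in runs:
        totals[run["pipeline"]][1] += 1
        if run["status"] == "success":
            totals[run["pipeline"]][0] += 1
    rates = {p: ok / n for p, (ok, n) in totals.items()}
    unhealthy = sorted(p for p, r in rates.items() if r < min_success_rate)
    return rates, unhealthy

runs = [
    {"pipeline": "ingest", "status": "success"},
    {"pipeline": "ingest", "status": "success"},
    {"pipeline": "enrich", "status": "success"},
    {"pipeline": "enrich", "status": "failed"},
]
rates, unhealthy = pipeline_health(runs)
print(rates, unhealthy)
```

A dashboard layer would typically window these rates over time rather than over all history, so regressions surface quickly.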
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights. As a Data Engineer II at JPMorgan Chase within the Consumer & Community Banking team, you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job Responsibilities
- Organize, update, and maintain gathered data so that it is actionable
- Demonstrate basic knowledge of data system components to determine the controls needed to ensure secure data access
- Make custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Update logical or physical data models based on new use cases, with minimal supervision
- Add to a team culture of diversity, equity, inclusion, and respect
- Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture

Required Qualifications, Capabilities, and Skills
- Formal training or certification in data engineering concepts and 3+ years of applied experience with AWS and Kubernetes
- Proficiency in one or more large-scale data processing frameworks such as Spark (Java/PySpark), along with knowledge of data pipelines (DPL), data modeling, data warehousing, and data migration
- Experience across the data lifecycle, along with expertise in consuming data via batch (file), near real-time (IBM MQ, Apache Kafka), or streaming (AWS Kinesis, MSK) interfaces
- Advanced SQL skills (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Significant experience with statistical data analysis and the ability to determine the appropriate tools and data patterns for a given analysis
- Experience customizing a tool's configuration to generate a product

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Experience designing and building REST API services using Java
- Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
- AWS Developer/Solutions Architect certification is highly desired
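"Update logical or physical data models based on new use cases", listed above, usually lands as a schema migration. This sketch shows an idempotent column addition against an in-memory SQLite database; the `accounts` table and `risk_tier` column are invented examples.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'Asha')")

def add_column_if_missing(conn, table, column, ddl_type):
    """Idempotently extend a physical model with a new column for a new
    use case. PRAGMA table_info lists existing columns, so re-running
    the migration is safe."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")

add_column_if_missing(conn, "accounts", "risk_tier", "TEXT")
add_column_if_missing(conn, "accounts", "risk_tier", "TEXT")  # safe to re-run

cols = [row[1] for row in conn.execute("PRAGMA table_info(accounts)")]
print(cols)
```

Production migration tools (Flyway, Liquibase, Alembic) add versioning and rollback on top of this same idempotence idea.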
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team. As a Data Engineer III at JPMorgan Chase within the Consumer & Community Banking team, you serve as a seasoned member of an agile team that designs and delivers trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
- Execute software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions and break down technical problems
- Support the review of controls to ensure sufficient protection of enterprise data
- Advise on and make custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Update logical or physical data models based on new use cases
- Frequently use SQL; understand NoSQL databases and their niche in the marketplace
- Add to a team culture of diversity, equity, inclusion, and respect
- Contribute to software and data engineering communities of practice and events that explore new and emerging technologies
- Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems

Required Qualifications, Capabilities, and Skills
- Formal training or certification in data engineering concepts and 3+ years of applied experience with AWS and Kubernetes
- Proficiency in one or more large-scale data processing frameworks such as Spark (Java/PySpark), along with knowledge of data pipelines (DPL), data modeling, data warehousing, and data migration
- Experience across the data lifecycle, along with expertise in consuming data via batch (file), near real-time (IBM MQ, Apache Kafka), or streaming (AWS Kinesis, MSK) interfaces
- Advanced SQL skills (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Significant experience with statistical data analysis and the ability to determine the appropriate tools and data patterns for a given analysis
- Experience customizing a tool's configuration to generate a product

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Experience designing and building REST API services using Java
- Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
- AWS Developer/Solutions Architect certification is highly desired
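The "statistical data analysis" requirement above starts, in practice, with simple screens such as z-score outlier detection before reaching for heavier tooling. The latency figures in this sketch are fabricated sample data; only the standard library is used.

```python
import statistics

def flag_outliers(values, z_threshold=2.0):
    """Flag values more than z_threshold sample standard deviations from
    the mean -- a basic first-pass screen on a numeric data set."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# Fabricated latency measurements with one obvious anomaly.
latencies = [12.0, 11.5, 12.3, 11.8, 12.1, 30.0]
print(flag_outliers(latencies))
```

On real pipeline data one would pick the threshold from the distribution (or use robust statistics such as the median absolute deviation), since a single extreme value inflates the standard deviation.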
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights. As a Data Engineer II at JPMorgan Chase within the Consumer & Community Banking team, you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job Responsibilities
- Execute software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions and break down technical problems
- Support the review of controls to ensure sufficient protection of enterprise data
- Advise on and make custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Update logical or physical data models based on new use cases
- Frequently use SQL; understand NoSQL databases and their niche in the marketplace
- Add to a team culture of diversity, equity, inclusion, and respect
- Contribute to software and data engineering communities of practice and events that explore new and emerging technologies
- Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems

Required Qualifications, Capabilities, and Skills
- Formal training or certification in data engineering concepts and 3+ years of applied experience with AWS and Kubernetes
- Proficiency in one or more large-scale data processing frameworks such as Spark (Java/PySpark), along with knowledge of data pipelines (DPL), data modeling, data warehousing, and data migration
- Experience across the data lifecycle, along with expertise in consuming data via batch (file), near real-time (IBM MQ, Apache Kafka), or streaming (AWS Kinesis, MSK) interfaces
- Good SQL skills (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Significant experience with statistical data analysis and the ability to determine the appropriate tools and data patterns for a given analysis
- Experience customizing a tool's configuration to generate a product

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Experience designing and building REST API services using Java
- Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
- AWS Developer/Solutions Architect certification is highly desired
Posted 1 week ago
0 years
4 - 6 Lacs
Hyderābād
On-site
Location: Hyderabad, Telangana, India | Category: Technology Careers | Job Id: JREQ191891 | Job Type: Full time, Hybrid

As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world’s leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business and your career. Our Service Management function is transforming into a truly global, data- and standards-driven organization, employing best-in-class tools and practices across all disciplines of Technology Operations. This will drive ever-greater stability and consistency of service across the technology estate as we drive towards an optimal customer and employee experience.

About the role: In this opportunity as Application Support Analyst, you will:
- Support Informatica development, extractions, and loading; fix data discrepancies and take care of performance monitoring.
- Collaborate with stakeholders such as business teams, product owners, and project management in defining roadmaps for applications and processes.
- Drive continual service improvement and innovation in productivity, software quality, and reliability, including meeting or exceeding SLAs.
- Apply a thorough understanding of ITIL processes related to incident management, problem management, application lifecycle management, and operational health management.
- Support applications built on modern application architectures and cloud infrastructure: Informatica PowerCenter/IDQ, JavaScript frameworks and libraries, HTML/CSS/JS, Node.js, TypeScript, jQuery, Docker, and AWS/Azure.

About You: You’re a fit for the role of Application Support Analyst - Informatica if your background includes:
- 3 to 8+ years of experience as an Informatica developer responsible for implementing ETL methodology in data extraction, transformation, and loading.
- Knowledge of ETL design: designing new or changed mappings and workflows with the team and preparing technical specifications.
- Experience creating ETL mappings, mapplets, workflows, and worklets using Informatica PowerCenter 10.x, and preparing the corresponding documentation.
- Designing and building integrations that support standard data warehousing objects (type-2 dimensions, aggregations, star schema, etc.).
- Ability to perform source system analysis as required.
- Working with DBAs and data architects to plan and implement an appropriate data partitioning strategy in the Enterprise Data Warehouse.
- Implementing versioning of the ETL repository and supporting code as necessary.
- Developing stored procedures, database triggers, and SQL queries where needed; implementing best practices and tuning SQL code for optimization.
- Loading data from SF Power Exchange to a relational database using Informatica.
- Working with XML, XML parsers, Java, and HTTP transformations within Informatica.
- Experience integrating various data sources such as Oracle, SQL Server, DB2, and flat files in formats such as fixed width, CSV, Salesforce, and Excel.
- In-depth knowledge of and experience in implementing best practices for the design and development of data warehouses using star schema and snowflake schema design concepts.
- Experience in performance tuning of sources, targets, mappings, transformations, and sessions.
- Support and development activities in a relational database environment: designing tables, procedures/functions, packages, triggers, and views, and using SQL proficiently in database programming.

What’s in it For You?
- Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news.

We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.
Posted 1 week ago