0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Software Engineer, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.
To be successful as a Software Engineer you should have experience with: Demonstrable expertise with front-end and back-end skillsets. Java proficiency (Java 17+) and the Spring ecosystem (Spring MVC, Data JPA, Security, etc.) with strong SQL and NoSQL integration expertise. React.js and JavaScript expertise: Material UI, Ant Design, and state management expertise (Redux, Zustand, or Context API). Strong knowledge of runtimes (virtualisation, containers, and Kubernetes) and expertise with test-driven development using frameworks like Cypress, Playwright, Selenium, etc. (a brief Playwright sketch follows this posting). Strong knowledge of CI/CD pipelines and tooling: GitHub Actions, Jenkins, GitLab CI, or similar. Monitoring and observability - logging/tracing and alerting, with knowledge of SRE integrations into open-source tooling like Grafana/ELK, etc.
Some other highly valued skills may include: Expertise building ELT pipelines and cloud/storage integrations - data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.). Expertise with security (OAuth2, CSRF/XSS protection), secure coding practices, and performance optimization - JVM tuning, performance profiling, caching, lazy loading, rate limiting, and high availability with large datasets. Expertise in public, private, and hybrid cloud technologies (DC, AWS, Azure, GCP, etc.) and across broad network domains (physical and wireless) – VXLAN/EVPN/WAN/SD-WAN/LAN/WLAN, etc.
You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.
Purpose of the role: To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.
Accountabilities: Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing. Staying informed of industry technology trends and innovations and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth. Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.
Assistant Vice President Expectations: To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions.
Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. Alternatively, as an individual contributor, they will lead collaborative assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external, such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
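For illustration only, here is a minimal sketch of the browser-level, test-driven checks the posting above refers to, written against Playwright's Python API. The URL, selectors, and expected behaviour are placeholder assumptions, not details from the role.

```python
# Minimal Playwright sketch: verify a login page rejects bad credentials.
# The URL and selectors are hypothetical placeholders.
from playwright.sync_api import sync_playwright

def test_login_rejects_bad_credentials():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/login")              # placeholder URL
        page.fill("input[name='username']", "demo-user")
        page.fill("input[name='password']", "wrong-pass")
        page.click("button[type='submit']")
        # Expect an error banner rather than a redirect to the dashboard.
        assert page.locator(".error-banner").is_visible()
        browser.close()
```

In a test-driven flow, a check like this would typically be written before the feature it exercises and run in the CI pipeline (GitHub Actions, Jenkins, or similar) on every change.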
Posted 2 weeks ago
2.0 years
0 Lacs
Delhi, India
On-site
JOB_POSTING-3-72598-2 Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and in the Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable, cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework. Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines. Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases. Research caching techniques and develop solutions for data caches such as GemFire and Redis (a brief cache-aside sketch follows this posting). Develop and deploy code on-premises and on AWS.
Required Skills/Knowledge
Experience with deployment of microservice architectures on the Pivotal Cloud Foundry platform. Experience with public cloud computing platforms such as AWS. Experience integrating with load balancers and the Protegrity platform for tokenization. Experience with Agile project management methods and practices. Demonstrated excellent planning and organizational skills. Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff. Able to work effectively with multiple teams and stakeholders.
Desired Skills/Knowledge
Experience working in the financial industry or with credit processing is good to have.
Willingness to stay abreast of the latest developments in technology. Experience working on a geographically distributed team, managing onshore/offshore resources with shifting priorities.
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience. Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments. Minimum 1 year of experience using in-memory data grid technologies such as GemFire, Redis or Hazelcast. Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB. Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB.
Work Timings: 3 PM to 12 AM IST. This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying. Inform your manager and HRM before applying for any role on Workday. Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format). You must not be on any corrective action plan (First Formal/Final Formal, LPP). L4 to L7 employees are eligible only if they have completed 12 months in the organization and 12 months in their current role and level. L8+ employees are eligible only if they have completed 18 months in the organization and 12 months in their current role and level. L04+ employees can apply.
Grade/Level: 09
Job Family Group: Information Technology
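As an aside, here is a minimal sketch of the cache-aside pattern behind the caching responsibility mentioned above (a Redis or GemFire cache in front of a slower store). The role itself is Java/Spring Boot; this sketch uses Python with redis-py purely for brevity, and the key format, host, and TTL are illustrative assumptions.

```python
# Cache-aside sketch: check Redis first, fall back to the primary store, then populate the cache.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)   # assumed local instance
CACHE_TTL_SECONDS = 300                                   # illustrative TTL

def fetch_account_profile(account_id: str, load_from_db) -> dict:
    """load_from_db is any callable that reads the record from the system of record."""
    key = f"account:{account_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: skip the slower database call
    record = load_from_db(account_id)       # cache miss: go to the primary store
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(record))
    return record
```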
Posted 2 weeks ago
2.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-72598-1 Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and in the Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable, cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework. Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines. Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases. Research caching techniques and develop solutions for data caches such as GemFire and Redis. Develop and deploy code on-premises and on AWS.
Required Skills/Knowledge
Experience with deployment of microservice architectures on the Pivotal Cloud Foundry platform. Experience with public cloud computing platforms such as AWS. Experience integrating with load balancers and the Protegrity platform for tokenization. Experience with Agile project management methods and practices. Demonstrated excellent planning and organizational skills. Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff. Able to work effectively with multiple teams and stakeholders.
Desired Skills/Knowledge
Experience working in the financial industry or with credit processing is good to have.
Willingness to stay abreast of the latest developments in technology. Experience working on a geographically distributed team, managing onshore/offshore resources with shifting priorities.
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience. Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments. Minimum 1 year of experience using in-memory data grid technologies such as GemFire, Redis or Hazelcast. Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB. Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB.
Work Timings: 3 PM to 12 AM IST. This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying. Inform your manager and HRM before applying for any role on Workday. Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format). You must not be on any corrective action plan (First Formal/Final Formal, LPP). L4 to L7 employees are eligible only if they have completed 12 months in the organization and 12 months in their current role and level. L8+ employees are eligible only if they have completed 18 months in the organization and 12 months in their current role and level. L04+ employees can apply.
Grade/Level: 09
Job Family Group: Information Technology
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB_POSTING-3-72598 Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and in the Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable, cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework. Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines. Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases. Research caching techniques and develop solutions for data caches such as GemFire and Redis. Develop and deploy code on-premises and on AWS.
Required Skills/Knowledge
Experience with deployment of microservice architectures on the Pivotal Cloud Foundry platform. Experience with public cloud computing platforms such as AWS. Experience integrating with load balancers and the Protegrity platform for tokenization. Experience with Agile project management methods and practices. Demonstrated excellent planning and organizational skills. Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff. Able to work effectively with multiple teams and stakeholders.
Desired Skills/Knowledge
Experience working in the financial industry or with credit processing is good to have.
Willingness to stay abreast of the latest developments in technology. Experience working on a geographically distributed team, managing onshore/offshore resources with shifting priorities.
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience. Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments. Minimum 1 year of experience using in-memory data grid technologies such as GemFire, Redis or Hazelcast. Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB. Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB.
Work Timings: 3 PM to 12 AM IST. This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying. Inform your manager and HRM before applying for any role on Workday. Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format). You must not be on any corrective action plan (First Formal/Final Formal, LPP). L4 to L7 employees are eligible only if they have completed 12 months in the organization and 12 months in their current role and level. L8+ employees are eligible only if they have completed 18 months in the organization and 12 months in their current role and level. L04+ employees can apply.
Grade/Level: 09
Job Family Group: Information Technology
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
"When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product" – Jeff Bezos
Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. How accurate is the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preferences? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.
The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility/high-impact projects. You will lead by example and be just as passionate about operational performance and predictability as you are about all other aspects of the customer experience.
The Successful Candidate Will Be Able To
Effectively manage customer expectations and resolve conflicts that balance client and company needs. Develop processes to effectively maintain and disseminate project information to stakeholders.
Be successful in a delivery-focused environment, determining the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done. Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment. Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams. Support the scalable growth of the company by developing and enabling the success of the Operations leadership team. Serve as a role model for Amazon Leadership Principles inside and outside the organization. Actively seek to implement and distribute best practices across the operation.
Key job responsibilities: metric reporting, dashboard development, ETL pipeline design, automation, experiment design and support, deep-dive analysis, insight generation, identification of product improvement opportunities, opportunity or problem sizing, support for anecdotal audits, etc.
Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, Power BI, QuickSight, or similar tools. Experience performing A/B testing and applying basic statistical methods (e.g. regression) to difficult business problems. Experience with a scripting language (e.g., Python, Java, or R). Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries). Track record of generating key business insights and collaborating with stakeholders.
Preferred Qualifications
Experience in designing and implementing custom reporting systems using automation tools. Knowledge of data modeling and data pipeline design.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ - H84 Job ID: A2876240
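Purely as an illustration of the A/B-testing qualification above, here is a self-contained two-proportion z-test sketch in Python; the conversion counts are made-up numbers, not Amazon data.

```python
# Two-proportion z-test sketch for an A/B experiment (e.g., delivery-success rate by treatment).
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return z, p_value

# Made-up counts: control converts 4,810/50,000, treatment 4,990/50,000.
z, p = two_proportion_z_test(4810, 50_000, 4990, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```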
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JOB_POSTING-3-72598-4 Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and in the Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable, cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework. Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines. Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases. Research caching techniques and develop solutions for data caches such as GemFire and Redis. Develop and deploy code on-premises and on AWS.
Required Skills/Knowledge
Experience with deployment of microservice architectures on the Pivotal Cloud Foundry platform. Experience with public cloud computing platforms such as AWS. Experience integrating with load balancers and the Protegrity platform for tokenization. Experience with Agile project management methods and practices. Demonstrated excellent planning and organizational skills. Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff. Able to work effectively with multiple teams and stakeholders.
Desired Skills/Knowledge
Experience working in the financial industry or with credit processing is good to have.
Willingness to stay abreast of the latest developments in technology. Experience working on a geographically distributed team, managing onshore/offshore resources with shifting priorities.
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience. Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments. Minimum 1 year of experience using in-memory data grid technologies such as GemFire, Redis or Hazelcast. Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB. Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB.
Work Timings: 3 PM to 12 AM IST. This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying. Inform your manager and HRM before applying for any role on Workday. Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format). You must not be on any corrective action plan (First Formal/Final Formal, LPP). L4 to L7 employees are eligible only if they have completed 12 months in the organization and 12 months in their current role and level. L8+ employees are eligible only if they have completed 18 months in the organization and 12 months in their current role and level. L04+ employees can apply.
Grade/Level: 09
Job Family Group: Information Technology
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at our company, you will be an integral part of a skilled Data Engineering team focused on developing reusable capabilities and tools to automate various data processing pipelines. Your responsibilities will include contributing to data acquisition, ingestion, processing, monitoring pipelines, and validating data. Your role is pivotal in maintaining the smooth operation of data ingestion and processing pipelines, ensuring that data in the data lake is up-to-date, valid, and usable at all times. With a minimum of 3 years of experience in data engineering, you should be proficient in Python programming and have a strong background in working with both RDBMS and NoSQL systems. Experience in the AWS ecosystem, including components like Airflow, EMR, Redshift, S3, Athena, and PySpark, is essential. Additionally, you should have expertise in developing REST APIs using Python frameworks such as Flask and FastAPI. Familiarity with crawling libraries like BeautifulSoup in Python would be advantageous. Your skill in writing complex SQL queries to retrieve key metrics and working with various data lake storage formats will be key to your success in this role.
Key Responsibilities:
- Design and implement scalable data pipelines capable of handling large data volumes.
- Develop ETL/ELT pipelines to extract data from upstream sources and synchronize it with data lakes in formats like Parquet, Iceberg, and Delta.
- Optimize and maintain data pipelines to ensure smooth operation and business continuity.
- Collaborate with cross-functional teams to source data for various business use cases.
- Stay informed about emerging data technologies and trends to enhance our data infrastructure and architecture continuously.
- Adhere to best practices in data querying and manipulation to uphold data integrity.
If you are a motivated Data Engineer with a passion for building robust data pipelines and ensuring data quality, we invite you to join our dynamic team and contribute to the success of our data engineering initiatives.
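As a rough sketch of the kind of pipeline step described above (ingest, validate, land in a lake-friendly format), here is a minimal PySpark job; the S3 paths and column names are placeholders, not actual pipeline configuration.

```python
# PySpark sketch: ingest raw CSV, apply light validation, and write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest-sketch").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-bucket/raw/orders/"))           # placeholder source path

clean = (raw
         .filter(F.col("order_id").isNotNull())            # drop rows missing the key
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("ingest_date", F.current_date()))

(clean.write
 .mode("append")
 .partitionBy("ingest_date")
 .parquet("s3://example-bucket/lake/orders/"))             # placeholder lake path
```

In practice a job like this would be scheduled by an orchestrator such as Airflow and followed by a validation step before downstream consumers read the table.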
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
FanCode is India’s premier sports destination committed to giving fans a highly personalised experience across content and merchandise for a wide variety of sports. Founded by sports industry veterans Yannick Colaco and Prasana Krishnan in March 2019, FanCode has over 100 million users. It has partnered with domestic and international sports leagues and associations across multiple sports. In content, FanCode offers interactive live streaming for sports with industry-first subscription formats like Match, Bundle and Tour Passes, along with monthly and annual subscriptions at affordable prices. Through FanCode Shop, it also offers fans a wide range of sports merchandise for sporting teams, brands and leagues across the world.
Technology @ FanCode
We have one mission: Create a platform for all sports fans. Built by sports fans for sports fans, we cover Sports Live Video Streaming, Live Scores & Commentary, Video On Demand, Player Analytics, Fantasy Research, News, and very recently, e-Commerce. We’re at the beginning of our story and growing at an incredible pace. Our tech stack is hosted on AWS and GCP, built on Amazon EC2, CloudFront, Lambda, API Gateway, and Google Compute Engine, Cloud Functions, and Google Cloud Storage. We have a microservices-based architecture using Java, Node.js, Python, PHP, Redis, MySQL, Cassandra, and Elasticsearch as our end-to-end stack serving product features. As a data-driven team, we also use Python and other big data technologies for Machine Learning and Predictive Analytics. Additionally, we heavily use Kafka, Spark, Redshift, BigQuery, and other cutting-edge technologies to keep improving FanCode's performance. You will be joining the Core Infra Engineering Team at FanCode, which runs a fresh, stable, and secure environment for our talented developers to thrive. Along with building a great foundation, this Core Infra team is also responsible for spreading their knowledge throughout the other teams, ensuring everyone takes advantage of the easy-to-use infrastructure, and applying best practices when it comes to Continuous Delivery, Containerization, Performance, Networking, and Security.
Your Role
Deploy solid, resilient cloud architectures by writing Infrastructure-as-Code automation tools. Design and implement the services and tools needed to manage and scale a service-oriented architecture, e.g., service discovery, config managers, container orchestration, and more. Build self-serve infrastructure automation and optimise deployment workflows at scale. Build and maintain a Compute Orchestration Platform using EC2 and GCE as the foundation. Develop and support tools for infrastructure, and evangelise best practices to be used by other engineers. Write code to implement networking and security at scale. Mentor and support engineers regarding development, concepts, and best practices.
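To give a flavour of the self-serve automation described above, here is a small boto3 sketch that flags EC2 instances missing a mandatory tag; the region and tag key are assumptions rather than FanCode conventions.

```python
# Sketch: report EC2 instances that are missing a mandatory "owner" tag.
import boto3

def untagged_instances(region: str = "ap-south-1", required_tag: str = "owner"):
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing the tag:", untagged_instances())
```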
Must Haves: 1 to 3 years of relevant experience. Proficient with at least one scripting language (Python or Bash). Strong infrastructure fundamentals (preferably on AWS and GCP). Experience with containers and orchestration tools like Kubernetes (GKE/EKS).
Good to Haves: Experience implementing CI and CD pipelines using Jenkins, ArgoCD, GitHub Actions. Experience using monitoring solutions like Datadog/New Relic, CloudWatch, ELK Stack, Prometheus/Grafana. Experience with DevOps tools like Terraform and Ansible. AWS, GCP or Azure certification(s) are a plus. Previous experience working in a startup. Love for sports.
Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11, the world’s largest fantasy sports platform, FanCode, India’s digital sports destination, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Modeller, you will be responsible for leading the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial for the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts. Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements. To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential. Additionally, you should possess a background in data normalization, denormalization, dimensional modeling, and schema design, along with hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, as well as a track record of successfully selling data and analytics software to enterprise customers, are key requirements. Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. Your ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.
Posted 2 weeks ago
3.0 years
0 Lacs
India
On-site
Data Engineer
About RevX
RevX helps app businesses acquire and reengage users via programmatic advertising to retain, monetize, and accelerate revenue. We're all about taking your app business to a new growth level. We rely on data science, innovative technology, AI, and a skilled team to create and deliver seamless ad experiences to delight your app users. That’s why RevX is the ideal partner for app marketers that demand trustworthy insights, a hands-on team, and a commitment to growth. We help you build sound mobile strategies, combining programmatic UA, app re-engagement, and performance branding to drive real and verifiable results so you can scale your business: with real users, high retention, and incremental revenue.
About the Role
We are seeking a forward-thinking Data Engineer who can bridge the gap between traditional data pipelines and modern Generative AI (GenAI)-enabled analytics tools. You'll design intelligent internal analytics systems using SQL, automation platforms like n8n, BI tools like Looker, and GenAI interfaces such as ChatGPT, Gemini, or LangChain. This is a unique opportunity to innovate at the intersection of data engineering, AI, and product analytics.
Key Responsibilities
Design, build, and maintain analytics workflows/tools leveraging GenAI platforms (e.g., ChatGPT, Gemini, etc.) and automation tools (e.g., n8n, Looker, etc.). Collaborate with product, marketing, and engineering teams to identify and deliver data-driven insights. Use SQL to query data from data warehouses (BigQuery, Redshift, Snowflake, etc.) and transform it for analysis or reporting. Build automated reporting and insight generation systems using visual dashboards and GenAI-based interfaces. Evaluate GenAI tools and APIs for applicability in data analytics workflows. Explore use cases where GenAI can assist in natural language querying, automated summarization, and explanatory analytics. Work closely with business teams to enable self-service analytics via intuitive GenAI-powered interfaces. Design and maintain robust data pipelines to ensure timely and accurate ingestion, transformation, and availability of data across systems. Implement best practices in data modeling, testing, and monitoring to ensure data quality and reliability in analytics workflows.
Requirements
3+ years of experience in data analysis or a related field. Strong proficiency in SQL with the ability to work across large datasets. Hands-on experience building data tools/workflows using any of the following: n8n, Looker/LookML, ChatGPT API, Gemini, LangChain, or similar. Familiarity with GenAI concepts, LLMs, prompt engineering, and their practical application in data querying and summarization. Excellent problem-solving skills and a mindset to automate and optimize wherever possible. Strong communication skills with the ability to translate complex data into actionable insights for non-technical stakeholders.
Nice to Have
Prior experience in AdTech (ad operations, performance marketing, attribution, audience insights, etc.). Experience with Python, Jupyter Notebooks, or scripting for data manipulation. Familiarity with cloud platforms like Google Cloud Platform (GCP) or AWS. Knowledge of data visualization tools like Tableau, Power BI, Looker, etc.
Why Join Us?
Work on the cutting edge of GenAI and data analytics innovation. Contribute to building scalable analytics tools that empower entire teams. Be part of a fast-moving, experimentation-driven culture where your ideas matter.
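As a rough sketch of the natural-language-querying idea mentioned above (and not RevX's actual implementation), the snippet below asks an LLM to draft SQL from a question plus a schema hint using the OpenAI Python SDK; the model name, table, and prompt are assumptions.

```python
# Sketch: turn a business question into a SQL draft with an LLM, then hand it to the warehouse.
# Review generated SQL before running it against production data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA_HINT = "Table installs(install_date DATE, campaign_id TEXT, country TEXT, installs INT)"

def question_to_sql(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": f"You write ANSI SQL only. Schema:\n{SCHEMA_HINT}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(question_to_sql("Daily installs by country for the last 7 days"))
```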
For more information visit www.revx.io
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Kochi, Kerala
On-site
As an experienced professional with 5-7 years of expertise, you will contribute to the seamless replication and integration of data from various source systems such as Oracle EBS, PLM, Retail, and SQL Server to Redshift and BI platforms. Your role will involve utilizing tools like GoldenGate, Qlik Replicate, and ODI to ensure efficient data transfer. Your key responsibilities will include configuring, monitoring, and troubleshooting Oracle GoldenGate replication pipelines for EBS, PLM, and BI systems. You will also be responsible for administering Qlik Replicate jobs and monitoring data synchronization from Oracle/SQL Server to AWS Redshift. Additionally, you will manage ODI interfaces for batch ETL processes that feed the data lake or reporting systems. It will be crucial for you to ensure data integrity, adhere to latency SLAs, and plan for failure recovery in replication and transformation pipelines. Collaboration with BI teams to resolve source-side data issues and support schema evolution planning will also be a part of your responsibilities. To excel in this role, you must possess expertise in Oracle GoldenGate for both active-active and active-passive replication scenarios. You should be familiar with Qlik Replicate and Redshift pipelines. Proficiency in ODI interface development and job orchestration is essential. Moreover, familiarity with data integrity validation, performance tuning, and log-based replication will be beneficial for this position.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are looking for an AWS Serverless Developer with 3-5 years of experience to join our team in Bengaluru. As an AWS Serverless Developer, your role will involve designing, developing, and maintaining serverless data pipelines using AWS Lambda, AWS Glue, IAM roles & policies, RDS, and other AWS services to ensure scalability and efficiency. You will be responsible for building audit and logging processes, ensuring data quality and governance, implementing CI/CD and Git repositories for code versioning, and developing new features with testing. Collaboration with the Lead Developer, Business Analyst, and Application Architect is essential to align on project goals. You will troubleshoot issues, perform bug fixing, and maintain documentation for applications, architecture, and processes. Working with a distributed team following Agile methodologies is crucial for timely project delivery. To be successful in this role, you should have 3-5 years of software development experience focusing on AWS Lambda, IAM roles & policies, RDS, and other AWS services. Proficiency in PySpark or Python programming, CloudFormation templates, and Git for version control, along with hands-on experience with AWS Aurora and/or Redshift, is required. Experience in data engineering topics such as AWS Glue and data pipeline development, API data interfacing over API Gateway, strong analytical, problem-solving, and debugging skills, as well as excellent organizational skills for task prioritization based on project needs, are key qualifications for this position.
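For orientation only, here is a minimal AWS Lambda handler in the serverless style this role describes: it reacts to a standard S3 object-created notification and emits a structured audit log line. The bucket contents and audit fields are illustrative.

```python
# Minimal Lambda sketch: log an audit record for each object landing in a raw-data bucket.
# Event shape follows the standard S3 "ObjectCreated" notification.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        logger.info(json.dumps({"action": "object_received",
                                "bucket": bucket, "key": key, "bytes": size}))
        processed.append(key)
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}
```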
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Title: L2 Application Support Engineer – Enterprise Data Warehouse (AWS)
Job Location: Remote
Experience: 3-5 years. Immediate joiners needed.
Work Schedule: 24x7 rotational shifts, weekend support, with on-call availability for high-priority batch or data issues.
Key Responsibilities
ETL & Data Pipeline Monitoring: Monitor daily ETL jobs and workflows using tools like Apache Airflow, AWS Glue, or Talend. Handle data ingestion failures, perform initial triage and escalate to L3 or data engineering if needed. Maintain job runbooks, batch schedules, and alert configurations.
Incident & Problem Management: Provide L2-level troubleshooting and resolution for data warehouse issues and outages. Log incidents and service requests in ITSM tools (e.g., ServiceNow, Jira). Perform root cause analysis (RCA) and create post-incident reports.
Data Quality & Validation: Run and validate data quality checks (e.g., nulls, mismatches, record counts). Ensure integrity of data ingestion from source systems (e.g., Finacle, UPI, CRM). Collaborate with business analysts and QA teams to confirm expected outputs.
Cloud Operations & Automation: Monitor AWS services (Redshift, S3, Glue, Lambda, CloudWatch, Athena) and respond to alerts. Automate recurring support tasks using Python, shell scripting, or Lambda triggers. Work closely with cloud DevOps or engineering teams for patching, scaling, or performance tuning.
Reporting & Documentation: Maintain support documentation, knowledge base articles, and daily handover reports. Assist in preparing monthly uptime/SLA reports. Participate in audit reviews and ensure compliance logs are retained.
Technical Skills
AWS Services: Redshift, S3, Glue, Athena, Lambda, CloudWatch. ETL Tools: AWS Glue, Apache Airflow, Talend. Scripting: Python, Shell, SQL (advanced). Monitoring: AWS CloudWatch, Grafana, Prometheus (optional). ITSM: ServiceNow, Jira. Database: PostgreSQL, Redshift SQL, MySQL. Security: IAM policies, data masking, audit logs.
Soft Skills & Functional Knowledge
Good understanding of data warehousing and BI principles. Excellent communication skills to liaise with business, operations, and L3 teams. Analytical thinking and a proactive problem-solving approach. Ability to handle high-pressure production issues in real time.
Preferred Certifications
AWS Certified Data Analytics – Specialty (preferred). AWS Certified Solutions Architect – Associate. ITIL Foundation (for incident/change processes).
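As a generic sketch of the null/record-count checks listed under Data Quality & Validation above (not the team's actual tooling), here is a small pandas routine; the file path, key columns, and thresholds are assumptions.

```python
# Data-quality sketch: basic null, row-count, and duplicate checks on a daily extract.
import pandas as pd

def validate_extract(path: str, key_columns, min_rows: int = 1000) -> list:
    df = pd.read_csv(path)
    issues = []
    if len(df) < min_rows:
        issues.append(f"row count {len(df)} below expected minimum {min_rows}")
    for col in key_columns:
        nulls = int(df[col].isna().sum())
        if nulls:
            issues.append(f"column '{col}' has {nulls} null values")
    dupes = int(df.duplicated(subset=key_columns).sum())
    if dupes:
        issues.append(f"{dupes} duplicate rows on key {key_columns}")
    return issues

# Hypothetical extract and key columns.
print(validate_extract("daily_transactions.csv", ["txn_id", "account_id"]))
```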
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions. Deliver on critical business priorities while ensuring alignment with the wider architectural vision. Identify and help address potential risks in the data supply chain. Follow and contribute to technical standards. Design and develop analytical data models.
Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course). 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies. Experience of relational databases and using SQL for data querying, transformation and manipulation. Experience of modelling data for analytical consumers. Ability to automate and streamline the build, test and deployment of data pipelines. Experience in cloud native technologies and patterns. A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training. Excellent communication and problem-solving skills.
Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica. Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing. Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design. Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures. Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala. DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.
Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows. Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs. Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls. Containerization: Fair understanding of containerization platforms like Docker and Kubernetes. File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta. Others: Basics of job schedulers like Autosys; basics of entitlement management. Certification on any of the above topics would be an advantage.
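As an illustrative sketch of the "analytical data model" responsibility above (not Citi's data model or tooling), here is a small Spark SQL job that derives a daily fact table from raw transactions; table names, columns, and paths are assumptions.

```python
# Sketch: derive a daily-balance fact table from a raw transactions table using Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-balance-fact-sketch").getOrCreate()

transactions = spark.read.parquet("s3://example-lake/raw/transactions/")   # placeholder path
transactions.createOrReplaceTempView("transactions")

daily_balance_fact = spark.sql("""
    SELECT account_id,
           CAST(txn_ts AS DATE) AS txn_date,
           SUM(amount)          AS net_movement,
           COUNT(*)             AS txn_count
    FROM   transactions
    GROUP  BY account_id, CAST(txn_ts AS DATE)
""")

(daily_balance_fact.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .parquet("s3://example-lake/curated/daily_balance_fact/"))                # placeholder path
```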
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation, inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive. What You'll Do As a Data Engineer, you will play a crucial role in designing, building, and maintaining the data infrastructure and systems required for efficient and reliable data processing. You will collaborate with cross-functional teams, including data scientists and analysts, to ensure the availability, integrity, and accessibility of data for various business needs. This role requires a strong understanding of data management principles, database technologies, data integration, and data warehousing concepts. Key Responsibilities Develop and maintain data warehouse solutions, including data modeling, schema design, and indexing strategies Optimize data processing workflows for improved performance, reliability, and scalability Identify and integrate diverse data sources, both internal and external, into a centralized data platform Implement and manage data lakes, data marts, or other storage solutions as required Ensure data privacy and compliance with relevant data protection regulations Define and implement data governance policies, standards, and best practices Transform raw data into usable formats for analytics, reporting, and machine learning purposes Perform data cleansing, normalization, aggregation, and enrichment operations to enhance data quality and usability Collaborate with data analysts and data scientists to understand data requirements and implement appropriate data transformations What You'll Bring Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field Proficiency in SQL and experience with relational databases (e.g., Snowflake, MySQL, PostgreSQL, Oracle) 3+ years of experience in data engineering or a similar role Hands-on programming skills in languages such as Python or Java are a plus Familiarity with cloud-based data platforms (e.g., AWS, Azure, GCP) and related services (e.g., S3, Redshift, BigQuery) is good to have Knowledge of data modeling and database design principles Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus Strong problem-solving and analytical skills with attention to detail Experience with HR data analysis and HR domain knowledge is preferred Who You'll Work With As part of the People Analytics team, you will modernize HR platforms, capabilities & engagement, automate/digitize core HR processes and operations and enable greater efficiency. You will collaborate with the global people team and colleagues across BCG to manage the life cycle of all BCG employees.
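As a small, hedged illustration of the cleansing, normalization, and aggregation responsibilities listed above, the pandas sketch below standardises a tiny, hypothetical HR extract and produces a simple summary; the fields and values are invented for the example.

    import pandas as pd

    # Hypothetical raw extract with duplicates and inconsistent formatting
    raw = pd.DataFrame({
        "employee_id": [101, 101, 102, 103],
        "office": ["Gurgaon ", "Gurgaon", "Delhi", None],
        "tenure_months": ["12", "12", "8", "15"],
    })

    # Cleansing and normalization: dedupe, trim text, fill gaps, fix types
    clean = (
        raw.drop_duplicates(subset="employee_id")
           .assign(
               office=lambda df: df["office"].str.strip().fillna("Unknown"),
               tenure_months=lambda df: pd.to_numeric(df["tenure_months"], errors="coerce"),
           )
    )

    # Simple aggregation for a reporting layer
    summary = clean.groupby("office", as_index=False)["tenure_months"].mean()
    print(summary)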
The People Management Team (PMT) comprises several centers of expertise including HR Operations, People Analytics, Career Development, Learning & Development, Talent Acquisition & Branding, Compensation, and Mobility. Our centers of expertise work together to build out new teams and capabilities by sourcing, acquiring and retaining the best, diverse talent for BCG's Global Services Business. We develop talent and capabilities, while enhancing managers' effectiveness, and building affiliation and engagement in our new global offices. The PMT also harmonizes process efficiencies, automation, and global standardization. Through analytics and digitalization, we are always looking to expand our PMT capabilities and coverage. Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
Posted 2 weeks ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Are you insatiably curious, deeply passionate about the realm of databases and analytics, and ready to tackle complex challenges in a dynamic environment in the era of AI? If so, we invite you to join our team as a Cloud & AI Solution Engineer in Innovative Data Platform for commercial customers at Microsoft. Here, you'll be at the forefront of innovation, working on cutting-edge projects that leverage the latest technologies to drive meaningful impact. Join us and be part of a team that thrives on collaboration, creativity, and continuous learning. Databases & Analytics is a growth opportunity for Microsoft Azure, as well as its partners and customers. It includes a rich portfolio of products including IaaS and PaaS services on the Azure Platform in the age of AI. These technologies empower customers to build, deploy, and manage database and analytics applications in a cloud-native way. As an Innovative Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like Proof of Concepts, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities. Responsibilities Drive technical sales with decision makers using demos and PoCs to influence solution design and enable production deployments. Lead hands-on engagements—hackathons and architecture workshops—to accelerate adoption of Microsoft's cloud platforms. Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions Resolve technical blockers and objections, collaborating with engineering to share insights and improve products. Maintain deep expertise in Analytics Portfolio: Microsoft Fabric (OneLake, DW, real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance and Azure Databases: SQL DB, Cosmos DB, PostgreSQL. Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop & BI solutions.
Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums Qualifications 10+ years technical pre-sales or technical consulting experience OR Bachelor's Degree in Computer Science, Information Technology, or related field AND 4+ years technical pre-sales or technical consulting experience OR Master's Degree in Computer Science, Information Technology, or related field AND 3+ year(s) technical pre-sales or technical consulting experience OR equivalent experience Expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps. Expert on Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) in data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated Data Security & Governance. Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes. 6+ years technical pre-sales, technical consulting, or technology delivery, or related experience OR equivalent experience 4+ years experience with cloud and hybrid, or on-premises infrastructure, architecture designs, migrations, industry standards, and/or technology management Proficient in data warehouse & big data migration including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks) and Azure Synapse Gen2. Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 3 weeks ago
0 years
0 - 0 Lacs
Gurgaon, Haryana, India
On-site
About Us KlearNow.AI digitizes and contextualizes unstructured trade documents to create shipment visibility, business intelligence, and advanced analytics for supply chain stakeholders. It provides unparalleled transparency and insights, empowering businesses to operate efficiently. We futurize supply chains with AI&ML-powered collaborative digital platforms created from ingesting required trade documentation without the pain of complex integrations. We achieve our goals by assembling a team of the best talents. As we expand, it's crucial to maintain and strengthen our culture, which places a high value on our people and teams. Our collective growth and triumphs are intrinsically linked to the success and well-being of every team member. OUR VISION To futurize global trade, empowering people and optimizing processes with AI-powered clarity. YOUR MISSION As part of a diverse, high-energy workplace, you will challenge the status quo of supply chain operations with your knack for engaging clients and sharing great stories. KlearNow is operational and a certified Customs Business provider in the US, Canada, UK, Spain and Netherlands, with plans to grow in many more markets in the near future. Join our vibrant and forward-thinking team at KlearNow.ai as we continue to push the boundaries of AI/ML technology. We offer a competitive salary, flexible work arrangements, and ample opportunities for professional growth. We are committed to diversity, equality and inclusion. If you are passionate about shaping the future of logistics and supply chain and making a difference, we invite you to apply. Business Analyst - Data Science & Business Intelligence Location: India Employment Type: Full-time The Role: Join our Data & Analytics team as a Business Analyst where you'll transform data from our modern data warehouse into actionable business insights and strategic recommendations.
You'll work with advanced analytics tools and techniques to create compelling reports, dashboards, and predictive models that drive data-driven decision making across the organization. Key Responsibilities: Analyze data from cloud data warehouses (like Amazon Redshift) to identify business trends and opportunities Create interactive dashboards and reports using Business Intelligence platforms (like ThoughtSpot, Power BI) Develop statistical models and perform predictive analytics using tools (like Python, R) Collaborate with stakeholders to understand business requirements and translate them into analytical solutions Design and implement KPIs, metrics, and performance indicators for various business functions Conduct ad-hoc analysis to support strategic business decisions and initiatives Present findings and recommendations to leadership through compelling data visualizations Monitor and troubleshoot existing reports and dashboards to ensure accuracy and performance Ensure data quality and consistency in all analytical outputs and reporting Support business teams with self-service analytics training and best practices Required Qualifications: Strong analytical and problem-solving skills with business acumen Experience with Business Intelligence tools and dashboard creation Proficiency in data analysis using programming languages (like Python, R) or advanced Excel Experience querying cloud data warehouses and relational databases Strong data visualization and storytelling capabilities Experience with statistical analysis and basic predictive modeling Preferred Qualifications: Experience with advanced BI platforms (like ThoughtSpot) is a significant advantage Machine learning and advanced statistical modeling experience Experience with modern analytics tools and frameworks Advanced data visualization and presentation skills Experience with business process optimization and data-driven strategy
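To make the Redshift-analysis side of the role concrete, here is a minimal, hedged Python sketch using psycopg2 and pandas; the cluster endpoint, credentials, table, and columns are hypothetical placeholders.

    import pandas as pd
    import psycopg2

    # Hypothetical Redshift connection details
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="report_user", password="***",
    )

    # Pull a small aggregate for a dashboard (hypothetical table and columns)
    query = """
        SELECT shipment_status, COUNT(*) AS shipments
        FROM shipments
        WHERE created_at >= DATEADD(day, -30, GETDATE())
        GROUP BY shipment_status
        ORDER BY shipments DESC;
    """
    df = pd.read_sql(query, conn)
    print(df)
    conn.close()

The resulting dataframe could then feed a BI tool such as ThoughtSpot or Power BI, or a quick matplotlib chart for an ad-hoc analysis.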
Posted 3 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description You're ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorganChase within the Consumer and Community Banking, you are part of an agile team that works to enhance, design, and deliver the software components of the firm's state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role. Job Responsibilities Work with Cloud Architect to identify data components and process flows Design and Develop data ingestion processes into Hadoop/AWS Platform Collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy Identify, analyze, and interpret trends or patterns in complex data sets Innovate new ways of managing, transforming and validating data Establish and enforce guidelines to ensure consistency, quality and completeness of data assets Apply quality assurance best practices to all work products Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 2+ years applied experience Experience with Big Data technologies (Spark, Glue, Hive, Redshift, Kafka, etc.) Experience programming in Python/Java Experience performing data analysis (NOT DATA SCIENCE) on AWS platforms Experience with data management processes on AWS is a huge plus Experience in implementing complex ETL transformations on big data platforms such as NoSQL databases (Mongo, DynamoDB, Cassandra) Familiarity with relational database environments (Oracle, Teradata, etc.) leveraging databases, tables/views, stored procedures, agent jobs, etc. Strong development discipline and adherence to best practices and standards Demonstrated independent problem-solving skills and ability to develop solutions to complex analytical/data-driven problems Experience working in development teams using agile techniques
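As a hedged illustration of the AWS-based ingestion work described above, the sketch below uses boto3 to start and poll an AWS Glue job; the job name and argument are hypothetical, not taken from the posting.

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Start a hypothetical ingestion job with a hypothetical source path argument
    run = glue.start_job_run(
        JobName="ingest-card-transactions",
        Arguments={"--source_path": "s3://example-bucket/landing/"},
    )
    run_id = run["JobRunId"]

    # Poll the run status once; real code would loop with a backoff
    status = glue.get_job_run(JobName="ingest-card-transactions", RunId=run_id)
    print(status["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED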
Posted 3 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Key Attributes: - Experience of implementing and delivering data solutions and pipelines on AWS Cloud Platform.
Design, implement, and maintain the data architecture for all AWS data services - A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes - Work with stakeholders to identify business needs and requirements for data-related projects - Strong SQL and/or Python or PySpark knowledge - Creating data models that can be used to extract information from various sources & store it in a usable format - Optimize data models for performance and efficiency - Write SQL queries to support data analysis and reporting - Monitor and troubleshoot data pipelines - Collaborate with software engineers to design and implement data-driven features - Perform root cause analysis on data issues - Maintain documentation of the data architecture and ETL processes - Identifying opportunities to improve performance by improving database structure or indexing methods - Maintaining existing applications by updating existing code or adding new features to meet new requirements - Designing and implementing security measures to protect data from unauthorized access or misuse - Recommending infrastructure changes to improve capacity or performance - Experience in the process industry Mandatory skill sets: Data Modelling, AWS, ETL Preferred skill sets: Data Modelling, AWS, ETL Years of experience required: 4-8 Years Education qualification: BE, B.Tech, MCA, M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills ETL Development Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
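Apache Airflow appears among the optional skills above; as a purely illustrative sketch (assuming Airflow 2.4+, with hypothetical DAG id, schedule, and task logic), a minimal daily pipeline for the monitor-and-troubleshoot responsibilities could look like this.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_to_staging():
        # Placeholder: pull source files and land them in a staging area
        print("extracting source data")

    def load_to_redshift():
        # Placeholder: COPY staged files into a Redshift table
        print("loading into Redshift")

    with DAG(
        dag_id="example_daily_load",      # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_to_staging)
        load = PythonOperator(task_id="load", python_callable=load_to_redshift)
        extract >> load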
Posted 3 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB_POSTING-3-72576 Job Description Role Title : Analyst, Analytics - Data Quality Developer (L08) Company Overview : Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles. Organizational Overview Our Analytics organization comprises data analysts who focus on enabling strategies to enhance customer and partner experience and optimize business performance through data management and development of full stack descriptive to prescriptive analytics solutions using cutting edge technologies, thereby enabling business growth. Role Summary/Purpose The Analyst, Analytics - Data Quality Developer (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role will be responsible for the proactive design, implementation, execution, and monitoring of Data Quality process capabilities within Synchrony's Public and Private cloud and on-prem environments within the Chief Data Office. The Data Quality Developer – Analyst will work within the IT organization to support and participate in build and run activities and environment (e.g. DevOps) for Data Quality. Key Responsibilities Monitor and maintain Data Quality and Data Issue Management operating level agreements in support of data quality rule execution and reporting Assist in performing root cause analysis for data quality issues and data usage challenges, particularly for the workload migration to the public cloud. Recommend, design, implement and refine / remediate data quality specifications within Synchrony's approved Data Quality platforms Participate in the solution design of data quality and data issue management technical and procedural solutions, including metric reporting Work closely with Technology teams and key stakeholders to ensure data quality issues are prioritized, analyzed and addressed Regularly communicate the status of data quality issues and progress to key stakeholders Participate in the planning and execution of agile release cycles and iterations Qualifications/Requirements Minimum of 1 year's experience in data quality management, including implementing data quality rules, data profiling and root cause analysis for data issues, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure.
Minimum of 1 year's experience with data quality or data integration tools such as Ab Initio, Informatica, Collibra, Stonebranch or Tableau, gained through hands-on experience or projects. Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, ability to work independently and manage multiple tasks, and attention to detail. Desired Characteristics Broad understanding of banking, credit card, payment solutions, collections, marketing, risk and regulatory & compliance. Experience using data governance and data quality tools such as: Collibra, Ab Initio Express>IT; Ab Initio MetaHub. Proficient in writing / understanding SQL. Experience querying/analyzing data in cloud-based environments (e.g., AWS, Redshift) AWS certifications such as AWS Cloud Practitioner, AWS Certified Data Analytics – Specialty Intermediate to advanced MS Office Suite skills including PowerPoint, Excel, Access, Visio. Strong relationship management and influencing skills to build enduring and productive alliances across matrix organizations. Demonstrated success in managing multiple deliverables concurrently, often within aggressive timeframes; ability to cope under time pressure. Experience in partnering with a diverse team composed of staff and consultants located in multiple locations and time zones. Eligibility Criteria: Bachelor's Degree, preferably in Engineering or Computer Science, with more than 1 year's hands-on Data Management experience, or in lieu of a degree, more than 3 years' experience. Work Timings: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details. For Internal Applicants Understand the criteria or mandatory skills required for the role before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be on any corrective action plan (Formal/Final Formal) or PIP L4 to L7 Employees who have completed 12 months in the organization and 12 months in their current role and level are eligible. L8+ Employees who have completed 18 months in the organization and 12 months in their current role and level are eligible. Grade/Level: 08 Job Family Group Information Technology
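To give a concrete, hedged sense of the data quality rule work described above, here is a minimal Python sketch that profiles a column for nulls and duplicates and flags failures against thresholds; the dataframe, column name, and thresholds are hypothetical, not tied to any Synchrony platform.

    import pandas as pd

    def run_quality_checks(df: pd.DataFrame) -> dict:
        """Evaluate two simple data quality rules and return a result summary."""
        results = {}

        # Rule 1: account_id must never be null (hypothetical critical field)
        null_rate = df["account_id"].isna().mean()
        results["account_id_null_rate"] = {"value": null_rate, "passed": null_rate == 0}

        # Rule 2: no more than 1% duplicate account_ids (hypothetical tolerance)
        dup_rate = df["account_id"].duplicated().mean()
        results["account_id_duplicate_rate"] = {"value": dup_rate, "passed": dup_rate <= 0.01}

        return results

    sample = pd.DataFrame({"account_id": [1, 2, 2, None, 5]})
    for rule, outcome in run_quality_checks(sample).items():
        print(rule, outcome)

In a platform such as Ab Initio or Collibra the same checks would be configured as rules rather than hand-written code, but the profiling logic is of this shape.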
Posted 3 weeks ago
8.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Company Description At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Job Description About the Role Nielsen is seeking an organized, detail-oriented team player to join the ITAM Back Office Engineering team in the role of Software Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. Ideal candidates will have exceptional skills in programming, testing, debugging and problem solving, as well as effective communication and writing skills. Qualifications Responsibilities System Deployment: Conceive, design and build new features in the existing backend processing pipelines. CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes. Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality. Performance Optimization: Identify and address performance bottlenecks in reading, processing and writing data to the backend data stores. Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development. Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security. Key Skills Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Proven experience, minimum 8 years, in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Python, SQL and databases such as Postgres Experience in development on an AWS platform Strong understanding of CI/CD principles and tools; GitLab a plus Excellent problem-solving and debugging skills. Strong communication and collaboration skills with the ability to communicate complex technical concepts and align the organization on decisions Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply Utilizes team collaboration to create innovative solutions efficiently Other Desirable Skills Knowledge of networking principles and security best practices. AWS certifications Experience with Data Warehouses, ETL, and/or Data Lakes very desirable Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, & OpsGenie a bonus Exposure to the Google Cloud Platform (GCP) Additional Information Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain.
Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Posted 3 weeks ago
2.0 - 4.0 years
9 Lacs
India
On-site
Data Engineer Experience: 2-4 Years Location: Kochi, Kerala (Work From Office) Key Responsibilities: Build and manage data lakes and data warehouses using services like Amazon S3, Redshift, and Athena Design and build secure, scalable, and efficient ETL/ELT pipelines on AWS using services like Glue, Lambda, Step Functions Work on SAP Datasphere to build and maintain Spaces, Data Builders, Views, and Consumption Layers Support data integration between AWS, Datasphere, and various source systems (SAP S/4HANA, non-SAP apps, flat files, etc.) Develop and maintain scalable data models and optimize queries for performance Monitor and optimize data workflows to ensure reliability, performance, and cost-efficiency Collaborate with Data Analysts and BI teams to provide clean, validated, and well-documented datasets Monitor, troubleshoot, and enhance data workflows and pipelines Ensure data quality, integrity, and governance policies are met Required Skills Strong SQL skills and experience with relational databases like MySQL or SQL Server Proficient in Python or Scala for data transformation and scripting Familiarity with cloud platforms like AWS (S3, Redshift, Glue), Datasphere, Azure Good-to-Have Skills: AWS Certification – AWS Certified Data Analytics Exposure to modern data stack tools like Snowflake Experience in cloud-based projects and working in an Agile environment Understanding of data governance, security best practices, and compliance standards Job Types: Full-time, Permanent Pay: Up to ₹960,000.00 per year Application Question(s): Willing to take up Work from Office mode in Kochi Location? Experience: Data Engineer / ETL Developer: 2 years (Required) AWS: 2 years (Required) SQL and (Python OR Scala): 2 years (Required) Datasphere OR "SAP BW" OR "SAP S/4HANA": 2 years (Required) AWS (S3, Redshift, Glue), Datasphere, Azure: 2 years (Required) PostgreSQL and MySQL or SQL Server: 2 years (Required)
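For the S3/Athena side of the data lake work described above, a minimal, hedged boto3 sketch that runs a query and checks its status might look like the following; the database, table, and bucket names are hypothetical placeholders.

    import boto3

    athena = boto3.client("athena", region_name="ap-south-1")

    # Run a hypothetical query against a Glue/Athena database over S3 data
    response = athena.start_query_execution(
        QueryString="SELECT order_id, order_value FROM sales_orders LIMIT 10",
        QueryExecutionContext={"Database": "example_lake"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )

    execution_id = response["QueryExecutionId"]
    state = athena.get_query_execution(QueryExecutionId=execution_id)
    print(state["QueryExecution"]["Status"]["State"])  # QUEUED, RUNNING, SUCCEEDED, FAILED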
Posted 3 weeks ago
8.0 years
20 Lacs
Hyderābād
On-site
Job Title: Senior Database Administrator Job Type: Full Time Experience Required: 8+ Years Job Description: We are seeking an experienced and strategic Senior Database Administrator (DBA) with deep expertise in SQL/MySQL, AWS Redshift, and infrastructure automation using Terraform. This role requires someone who can design scalable data solutions, lead database optimization efforts, and support modern data platforms in a cloud-native environment. Key Responsibilities: Design, deploy, and manage highly available, scalable databases — with a strong emphasis on SQL, MySQL, and AWS Redshift. Implement and maintain infrastructure as code using Terraform for automating database and AWS infrastructure provisioning. Optimize performance and reliability across relational and NoSQL databases including Redshift, MySQL, SQL Server, DynamoDB, and Neo4j. Lead data platform integration efforts with applications developed in Node.js and other backend technologies. Manage real-time and batch data pipelines using tools like Qlik Replicate and Kafka. Architect and maintain workflows using a range of AWS services, such as Kinesis, Lambda, Glue, S3, Step Functions, SNS, SQS, EventBridge, EC2, CloudFormation, and API Gateway. Establish robust observability using tools like New Relic for database monitoring and performance. Required Skills and Qualifications: 8+ years of professional experience in database administration and data engineering. Extensive hands-on experience with SQL and MySQL, and managing AWS Redshift in production environments. Strong command of Terraform for infrastructure automation and provisioning. Proficiency in PowerShell and Python for scripting and automation. Solid experience with Node.js or a similar programming language for integration. Working knowledge of Neo4j, DynamoDB, and SQL Server. Experience with Qlik Replicate and Kafka for data replication and streaming. Deep understanding of cloud architecture, event-driven systems, and serverless AWS environments. Proficiency with monitoring and observability tools such as New Relic. Familiarity with Okta for identity and access management. Excellent problem-solving and communication skills; ability to lead initiatives and mentor junior team members. Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field is required. Job Type: Full-time Pay: Up to ₹2,000,000.00 per year Schedule: Day shift Application Question(s): What is your expected CTC? Do you have 6+ years of hands-on experience with SQL, MySQL, AWS? Work Location: In person
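On the Python scripting and automation side of this role, a minimal, hedged boto3 sketch such as the one below could check Redshift cluster health; the cluster identifier and region are hypothetical, and the Terraform provisioning itself is not shown here.

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Check the status of a hypothetical production cluster
    clusters = redshift.describe_clusters(ClusterIdentifier="example-prod-cluster")
    for cluster in clusters["Clusters"]:
        print(
            cluster["ClusterIdentifier"],
            cluster["ClusterStatus"],
            cluster["ClusterAvailabilityStatus"],
        )

A script like this would typically run on a schedule and push its findings into the monitoring stack (for example New Relic) rather than printing to stdout.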
Posted 3 weeks ago
1.0 - 3.0 years
8 - 9 Lacs
Hyderābād
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. Job Description Summary The Software Engineer II, Aera DI role is accountable for developing data solutions and operations support of the Enterprise data lake. The role will be accountable for developing the pipelines for the data enablement projects, production/application support and enhancements, and supporting data operations activities. Additional responsibilities include data analysis, data operations process and tools, data cataloguing, and developing data SME skills in the Global Product Development and Supply - Data and Analytics Enablement organization. Key Responsibilities: Responsible for delivering high-quality data products and analytic-ready data solutions Develop and maintain data models to support our reporting and analysis needs. Develop ad-hoc analytic solutions from solution design to testing, deployment, and full lifecycle management. Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements Ensure data quality and integrity through data validation and testing Proficient in Python/Node.js along with UI technologies like React.js, Spark, SQL, AWS Redshift, AWS S3, Glue/Glue Studio, Athena, IAM and other native AWS services, with familiarity with Domino/data lake principles. Required: 1-3 years of experience in the information technology field developing software applications. Working experience with Aera Decision Intelligence is preferred. Good understanding of cloud technologies, preferably AWS, and related services in delivering and supporting data and analytics solutions/data lakes Proficient in Java/ReactJS/NodeJS, SQL, Python, Spark. Java knowledge can be an added advantage to explore Java-based DI tools. Ideal Candidates Would Also Have: Prior experience in global life sciences, especially in the GPS functional area, will be a plus Experience working internationally with a globally dispersed team, including diverse stakeholders and management of offshore technical development team(s) Strong communication and presentation skills Other Qualifications: Bachelor's degree in Computer Science, Information Systems, Computer Engineering or equivalent is preferred If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary.
Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com . Visit careers.bms.com/ eeo -accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 3 weeks ago
2.0 years
1 - 10 Lacs
Hyderābād
On-site
You're ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorganChase within the Consumer and Community Banking, you are part of an agile team that works to enhance, design, and deliver the software components of the firm's state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role. Job responsibilities: Work with Cloud Architect to identify data components and process flows Design and Develop data ingestion processes into Hadoop/AWS Platform Collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy Identify, analyze, and interpret trends or patterns in complex data sets Innovate new ways of managing, transforming and validating data Establish and enforce guidelines to ensure consistency, quality and completeness of data assets Apply quality assurance best practices to all work products Required qualifications, capabilities, and skills: Formal training or certification on software engineering concepts and 2+ years applied experience Experience with Big Data technologies (Spark, Glue, Hive, Redshift, Kafka, etc.) Experience programming in Python/Java Experience performing data analysis (NOT DATA SCIENCE) on AWS platforms Experience with data management processes on AWS is a huge plus Experience in implementing complex ETL transformations on big data platforms such as NoSQL databases (Mongo, DynamoDB, Cassandra) Familiarity with relational database environments (Oracle, Teradata, etc.) leveraging databases, tables/views, stored procedures, agent jobs, etc. Strong development discipline and adherence to best practices and standards Demonstrated independent problem-solving skills and ability to develop solutions to complex analytical/data-driven problems Experience working in development teams using agile techniques
Posted 3 weeks ago