15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Principal Architect, you will work to solve some of the most complex and captivating data management problems, enabling clients to operate as data-driven organizations, and seamlessly switch between the roles of individual contributor, team member, and architect as each project demands to define, design, and deliver actionable insights.

On a typical day, you might:
- Understand business requirements and translate them into conceptual, logical, and physical data models.
- Act as a principal advisor on data architecture across various data requirements: aggregation, data lakes, data models, data warehouses, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with data engineers to define ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modeling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop a next-generation data platform.
- Closely monitor project progress and provide regular updates to leadership on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

What do we expect? Skills that we'd love!
- Minimum 15 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience with data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Solid experience in one or more RDBMS (such as Oracle, DB2, SQL Server).
- Good understanding of relational, dimensional, and Data Vault modelling (a minimal dimensional-model sketch follows below).
- Experience implementing two or more data models in a database with data security and access controls.
- Good experience with OLTP and OLAP systems.
- Excellent data analysis skills with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of data quality and data governance.
- Good understanding of NoSQL databases and modeling techniques.
- Good understanding of one or more business domains.
- Understanding of the big data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience leading large and complex teams.
- Good understanding of agile methodology.

You are important to us, let's stay connected!
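As an aside (not part of the original posting): the star-schema style of dimensional modelling referenced above can be illustrated with a minimal, self-contained Python sketch using the standard library's sqlite3 module. All table and column names here are hypothetical.

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimensions.
# All names (dim_date, dim_store, fact_sales) are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date  (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_store (store_key INTEGER PRIMARY KEY, store_name TEXT, city TEXT);
CREATE TABLE fact_sales (
    date_key   INTEGER REFERENCES dim_date(date_key),
    store_key  INTEGER REFERENCES dim_store(store_key),
    units_sold INTEGER,
    revenue    REAL
);
INSERT INTO dim_date  VALUES (20250101, '2025-01-01', 'Jan', 2025);
INSERT INTO dim_store VALUES (1, 'Chennai Central', 'Chennai');
INSERT INTO fact_sales VALUES (20250101, 1, 42, 8400.0);
""")

# A typical OLAP-style rollup: aggregate the fact by dimension attributes.
for row in con.execute("""
    SELECT d.year, s.city, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date  d ON d.date_key  = f.date_key
    JOIN dim_store s ON s.store_key = f.store_key
    GROUP BY d.year, s.city
"""):
    print(row)  # (2025, 'Chennai', 8400.0)
```

The design choice the sketch shows: facts hold measures and foreign keys only, while descriptive attributes live in the dimensions, so rollups are a join plus a GROUP BY.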
Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Posted 1 month ago
20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description: Over the past 20 years, Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection, and value on Amazon.com. By deploying Amazon Pay's products and services, merchants make it easy for these millions of customers to safely purchase from their third-party sites using the information already stored in their Amazon account. In this role, you will lead data engineering efforts to drive automation for the Amazon Pay organization. You will be part of the data engineering team that will envision, build, and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will work with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions.

Key job responsibilities:
- Design, implement, and support a platform providing ad-hoc access to large data sets.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies.
- Model data and metadata for ad-hoc and pre-built reporting.
- Interface with business customers, gathering requirements and delivering complete reporting solutions.
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (a minimal pipeline sketch follows below).
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs.
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Basic Qualifications:
- 3+ years of data engineering experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience with SQL.

Preferred Qualifications:
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Company - ADCI - Karnataka. Job ID: A2986853
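For illustration only (this is not Amazon's actual stack or code): an extract-transform-load pipeline of the kind described above, reduced to a dependency-free Python sketch. The raw CSV blob, table names, and cleaning rules are all hypothetical stand-ins for data that would normally arrive from S3 or an upstream service.

```python
import csv
import io
import sqlite3

# Hypothetical raw export; in a real pipeline this would stream from S3 or an API.
RAW = "order_id,amount,currency\n1001,49.99,USD\n1002,,USD\n1003,15.50,EUR\n"

def extract(blob: str) -> list:
    return list(csv.DictReader(io.StringIO(blob)))

def transform(rows: list) -> list:
    # Drop records with missing amounts and normalize types.
    return [(int(r["order_id"]), float(r["amount"]), r["currency"])
            for r in rows if r["amount"]]

def load(rows: list, con: sqlite3.Connection) -> None:
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id INT, amount REAL, currency TEXT)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

con = sqlite3.connect(":memory:")
load(transform(extract(RAW)), con)
print(con.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # (2, 65.49)
```

The same extract/transform/load split scales up directly: in Spark the transform becomes DataFrame operations and the load becomes a Redshift or warehouse write.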
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Integration Development: Design and implement integration solutions using the MuleSoft Anypoint Platform for various enterprise applications, including ERP, CRM, and third-party systems. API Management: Develop and manage APIs using MuleSoft's API Gateway, ensuring best practices for API design, security, and monitoring. MuleSoft Anypoint Studio: Develop, deploy, and monitor MuleSoft applications using Anypoint Studio and the Anypoint Management Console. Data Transformation: Use MuleSoft's DataWeave to transform data between various formats (XML, JSON, CSV, etc.) as part of integration solutions (an illustrative transformation sketch follows below). Troubleshooting and Debugging: Provide support in troubleshooting and resolving integration issues and ensure the solutions are robust and scalable. Collaboration: Work closely with other developers, business analysts, and stakeholders to gather requirements, design, and implement integration solutions. Documentation: Create and maintain technical documentation for the integration solutions, including API specifications, integration architecture, and deployment processes. Best Practices: Ensure that the integrations follow industry best practices and MuleSoft's guidelines for designing and implementing scalable and secure solutions.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in MuleSoft development and integration projects.
- Proficiency in the MuleSoft Anypoint Platform, including Anypoint Studio, Anypoint Exchange, and the Anypoint Management Console.
- Strong knowledge of API design and management, including REST, SOAP, and web services.
- Proficiency in DataWeave for data transformation.
- Hands-on experience with integration patterns and technologies such as JMS, HTTP/HTTPS, File, Database, and cloud integrations.
- Experience with CI/CD pipelines and deployment tools such as Jenkins, Git, and Maven.
- Good understanding of cloud platforms (AWS, Azure, or GCP) and how MuleSoft integrates with cloud services.
- Excellent troubleshooting and problem-solving skills.
- Strong communication skills and the ability to work effectively in a team environment.
- Strong working knowledge of modern programming languages, ETL/data integration tools (preferably SnapLogic), and cloud concepts: SSL/TLS, SQL, REST, JDBC, JavaScript, JSON.
- Strong hands-on experience in SnapLogic design and development, with good working experience using various snaps for JDBC, SAP, Files, REST, SOAP, etc.
- Should be able to deliver projects by leading a team of 6-8 members.
- Should have experience in integration projects with heterogeneous landscapes.
- Good to have: the ability to build complex mappings with JSONPath expressions, flat files, and Python scripting, and experience in Groundplex and Cloudplex integrations.
- Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL, and Redshift).
- Real-time experience working with OLAP and OLTP database models (dimensional models).
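DataWeave is MuleSoft's own transformation language, so as a language-neutral illustration of the same kind of payload mapping (JSON in, flat CSV out), here is a Python sketch. The payload shape and field names are hypothetical, not taken from any real integration.

```python
import csv
import io
import json

# Hypothetical inbound payload, e.g. from a CRM REST API.
payload = json.loads("""
[
  {"id": "C-1", "name": "Asha", "orders": [{"total": 120.5}, {"total": 30.0}]},
  {"id": "C-2", "name": "Ravi", "orders": []}
]
""")

# Map each record to a flat row, the way a DataWeave script maps payload fields.
rows = [
    {"customer_id": c["id"],
     "customer_name": c["name"],
     "order_count": len(c["orders"]),
     "order_total": sum(o["total"] for o in c["orders"])}
    for c in payload
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

In DataWeave the equivalent would be a `payload map { ... }` expression with `output application/csv`; the structural idea — declarative per-record mapping between formats — is the same.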
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job description: Our engineering team is growing, and we are looking to bring on board a Python Developer who can help us transition to the next phase of the company. You will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothing the DevOps process. You must have a well-versed understanding of software paradigms and the curiosity to carve out designs for varying ML, MLOps, and LLMOps problem statements. You will be determined to lead your team in the right direction through to the very end of implementation for each project. By joining our team, you will get exposure to working across a swath of modern technologies while building an enterprise-grade ML platform in a most promising area.

Responsibilities:
- Be the bridge between engineering and product teams: understand the long-term product roadmap and architect a system design that will scale with our plans.
- Take ownership of converting product insights into detailed engineering requirements.
- Break down work among the team and orchestrate the development of components for each sprint.
- Be well versed in solution design and documentation (HLD/LLD).
- Develop "zero-defect software" with extreme efficiency by utilizing modern cutting-edge tools (ChatGPT, Copilot, etc.).
- Adopt, and impart to others, the mindset of building software that is secure, instrumented, and resilient.
- Author high-quality, high-performance, unit-tested code running in a distributed environment using containers.
- Continually evaluate and improve DevOps processes for a cloud-native codebase.
- Bring strong design skills in defining API data contracts, OOAD, microservices, data models, and concurrency concepts (a minimal data-contract sketch follows below).
- Be an ardent leader with an obsession for quality, refinement, innovation, and empowering leadership.

Qualifications and work experience:
- 5-7 years of experience, with hands-on development of full-fledged systems/microservices using Python.
- 3+ years of senior engineering responsibilities.
- 3+ years of people mentorship/leadership experience managing engineers, preferably with good exposure to leading multiple development teams.
- 3+ years of experience in object-oriented design and agile development methodologies.
- Basic experience in developing/deploying cloud-native software using GCP/AWS/Azure.
- Proven track record of building large-scale, product-grade (high-throughput, low-latency, scalable) systems.
- A well-versed understanding of, and design skills for, SQL/NoSQL/OLAP databases.
- Up to date with modern cutting-edge technologies to boost the efficiency and delivery of the team. (Bonus: an understanding of generative AI frameworks/libraries such as RAG, LangChain, LlamaIndex, etc.)

Skills:
- Strong documentation skills. As a team, we heavily rely on elaborate documentation for everything we are working on.
- Ability to take authoritative decisions and hold accountability.
- Ability to motivate, lead, and empower others.
- Strong independent contributor as well as a team player.
- Working knowledge of ML and familiarity with MLOps concepts.

You will excel in this role if you have a product mindset: you understand, care about, and can relate to our customers; you take ownership, collaborate, and follow through to the very end; you love solving difficult problems, stand your ground, and get what you want from engineers; and you resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
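As a sketch (not from the posting) of what an "API data contract" can look like in Python: typed, validated request/response objects that both sides of a service boundary agree on. The endpoint, class names, and scoring logic below are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical request/response contract for one microservice endpoint.
@dataclass(frozen=True)
class PredictionRequest:
    model_name: str
    features: dict

    def __post_init__(self) -> None:
        if not self.features:
            raise ValueError("features must not be empty")

@dataclass(frozen=True)
class PredictionResponse:
    model_name: str
    score: float
    warnings: list = field(default_factory=list)

def handle(req: PredictionRequest) -> PredictionResponse:
    # Placeholder scoring; a real service would invoke the model here.
    score = sum(req.features.values()) / len(req.features)
    return PredictionResponse(req.model_name, score)

resp = handle(PredictionRequest("churn-v2", {"tenure": 0.4, "usage": 0.9}))
print(resp)  # PredictionResponse(model_name='churn-v2', score=0.65, warnings=[])
```

Freezing the dataclasses and validating in `__post_init__` makes the contract explicit and tamper-proof at the boundary; in production teams often reach for Pydantic or protobuf for the same purpose.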
Posted 1 month ago
5.0 - 10.0 years
10 - 18 Lacs
Pune
Hybrid
Design & develop full-stack Java applications using the Spring ecosystem. Build RESTful web services & microservices architecture. Develop responsive UIs using HTML5, React/Angular, JavaScript, CSS. Implement integration solutions using REST APIs and Kafka/MQ.
Posted 1 month ago
7.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Location: Bengaluru, Karnataka, India. Job ID: R0096836. Date Posted: 2025-06-11. Company Name: HITACHI ENERGY TECHNOLOGY SERVICES PRIVATE LIMITED. Profession (Job Category): Quality Management. Job Schedule: Full time. Remote: No.

Job Description: The opportunity: We are seeking a highly skilled and experienced Analytics Specialist to design, develop, and deliver robust data-driven solutions using Power BI, Power Apps, and related Microsoft technologies. The ideal candidate will have strong analytical skills, hands-on experience in AI projects, and a deep understanding of business intelligence tools and data modeling.

How you'll make an impact:
- Design and develop Power BI reports, dashboards, and data models to meet business requirements.
- Manage Power BI/Power Apps/AI projects independently and work with global stakeholders.
- Administer the Power BI service and integrate reports with other business applications.
- Create and manage OLAP cubes and tabular models compatible with data warehouse standards.
- Perform advanced DAX calculations and build efficient data models.
- Ensure security compliance through implementation of row-level security and access controls.
- Collaborate with cross-functional teams to understand reporting needs and deliver actionable insights.
- Maintain documentation and provide knowledge transfer to stakeholders.
- Contribute to AI-based analytics projects and drive automation using APIs and embedded analytics.
- Manage and deliver Q&O monthly performance reports with high accuracy and timeliness.
- Continuously validate, automate, and improve reporting quality to ensure data integrity and actionable insights.
- Manage multiple stakeholders across functions and business lines, requiring strong influencing skills.
- Lead projects independently with limited supervision; strong ownership and accountability needed.
- Integrate data from multiple systems and maintain reporting consistency.
- Communicate insights effectively to senior leaders and diverse teams, with the ability to simplify complex data.
- Drive and manage analytics/reporting projects end-to-end, including scope, timelines, delivery, and stakeholder engagement.
- Capture business requirements and transform them into efficient Power BI dashboards, KPI scorecards, and reports.
- Build and maintain Analysis Services reporting models and develop scalable data models aligned with BI best practices.
- Interact with BU teams to identify improvement opportunities and implement enhancement strategies.
- Seek user feedback for enhancements and stay updated on trends in performance and analytics.
- Ensure compliance with applicable external and internal regulations, procedures, and guidelines.
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.

Your background:
- Graduate/postgraduate degree in Engineering, Finance, Business Management, Data Science, Statistics, Mathematics, or a similar quantitative field.
- Minimum 7 years of experience.
- Power BI (development, DAX, publishing, and scheduling).
- Hands-on experience in Power Apps, SQL Data Warehouse, SSAS, OLAP cubes, Microsoft Azure, and Visual Studio.
- Exposure to AI and automation projects.
- Microsoft DA-100 certification preferred.
- Proficiency in both spoken and written English is required.
Qualified individuals with a disability may request a reasonable accommodation if you are unable or limited in your ability to use or access the Hitachi Energy career site as a result of your disability. You may request reasonable accommodations by completing a general inquiry form on our website. Please include your contact information and specific details about your required accommodation to support you during the job application process. This is solely for job seekers with disabilities requiring accessibility assistance or an accommodation in the job application process. Messages left for other purposes will not receive a response.
Posted 1 month ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us: Lemma Technologies is a software start-up based in Baner, Pune. We are unleashing the power of programmatic AdTech in the DOOH (digital out-of-home) world. Our mission is to transform digital out-of-home media to connect brands with their consumers by establishing authentic and transparent standards. Innovation is our DNA and transparency is our RNA. We are revolutionising the DOOH industry. As an organisation, we successfully deliver brand stories seamlessly across all large-format digital screens, from DOOH to CTV and even mobile and desktop devices. We are focussed on connecting DOOH media to mainstream digital, enabling brands to deploy omni-digital strategies through our platform.

Roles & Responsibilities: Chief Data Scientist/Architect of Lemma Technologies. This role will be responsible for defining and executing the technical strategy for adopting modern AI/ML practices to acquire and process data and provide actionable insights to Lemma customers.
- Good understanding of the entire journey of data acquisition, data warehousing, information architecture, dashboards, reports, predictive insights, and adoption of AI/ML and NLP, providing innovative data-oriented insights for Lemma customers.
- Deep understanding of data science and technology; able to recommend adoption of the right technical tools and strategies.
- Expected to be a hands-on technical expert who will build and guide a technical data team.
- Build, design, and implement our highly scalable, fault-tolerant, highly available big data platform to process terabytes of data and provide customers with in-depth analytics.
- Deep data science and AI/ML hands-on experience to give actionable insights to Lemma's advertisers and customers.
- Good overview of the modern technology stack, such as Spark, Hadoop, Kafka, HBase, Hive, Presto, etc.
- Automate high-volume data collection and processing to provide real-time data analytics.
- Customize Lemma's reporting and analytics platform based on customer requirements and deliver scalable, production-ready solutions.
- Lead multiple projects to develop features for the data processing and reporting platform; collaborate with product managers, cross-functional teams, and other stakeholders to ensure successful delivery of projects.
- Leverage a broad range of Lemma's data architecture strategies, proposing both data flows and storage solutions.
- Manage Hadoop MapReduce and Spark jobs and solve any ongoing issues with operating the cluster.
- Work closely with cross-functional teams on improving the availability and scalability of the large data platform and the functionality of Lemma software.
- Participate in Agile/Scrum processes such as sprint planning, sprint retrospectives, backlog grooming, user story management, and work item prioritization.

Skills Required:
- 10 to 12+ years of proven experience in designing, implementing, and delivering complex, scalable, and resilient platforms and services.
- Experience in building AI, machine learning, and data analytics solutions.
- Experience in OLAP (Snowflake, Vertica, or similar) would be an added advantage.
- Ability to understand vague business problems and convert them into working solutions.
- Excellent spoken and written interpersonal skills with a collaborative approach.
- Dedication to developing high-quality software and products.
- Curiosity to explore and understand data is a strong plus.
- Deep understanding of big data and distributed systems (MapReduce, Spark, Hive, Kafka, Oozie, Airflow).
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Req ID: 310007. We are currently seeking a Digital Engineering Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Data Modeler Position Overview: The Data Modeler will be responsible for designing and implementing data models that support the organization's data management and analytics needs. This role involves collaborating with various stakeholders to understand data sources, relationships, and business requirements, and translating them into effective data structures.

Key Responsibilities:
- Collaborate with business analysts: understand different data sources and their relationships.
- Prepare a conformed dimension matrix: identify different grains of facts, finalize dimensions, and harmonize data across sources.
- Create data models: develop Source-to-Target Mapping (STM) documentation and custom mappings, both technical and non-technical (a minimal STM sketch follows below).
- Include transformation rules: ensure STMs include pseudo-SQL queries for transformation rules.
- Coordinate reviews: work with data architects, product owners, and enablement teams to review and approve models, STMs, and custom mappings.
- Engage with data engineers: clarify any questions related to STMs and custom mappings.

Required Technical Skills:
- Proficiency in SQL: strong understanding of SQL and database management systems.
- Data modeling tools: familiarity with tools such as ERwin, IBM InfoSphere Data Architect, or similar.
- Data warehousing concepts: solid knowledge of data warehousing principles, ETL processes, and OLAP.
- Data governance and compliance: understanding of data governance frameworks and compliance requirements.

Key Competencies:
- Analytical skills: ability to analyze complex data sets and derive meaningful insights.
- Attention to detail: ensure accuracy and consistency in data models.
- Communication skills: effectively collaborate with stakeholders and articulate technical concepts to non-technical team members.
- Project management skills: ability to prioritize tasks, manage timelines, and coordinate with cross-functional teams.
- Continuous learning and adaptability: commitment to ongoing professional development and adaptability to changing business needs and technologies.

Additional:
- Problem-solving abilities: innovative solutions to data integration, quality, and performance challenges.
- Knowledge of data modeling methodologies: entity-relationship modeling, dimensional modeling, normalization techniques.
- Familiarity with business intelligence tools: enhances the ability to design data structures that facilitate data analysis and visualization.

Preferred Qualifications:
- Experience in the SDLC: understanding of all phases of the Software Development Life Cycle.
- Certifications: relevant certifications in data modeling, data warehousing, or related fields.
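To make the Source-to-Target Mapping deliverable concrete, here is an illustrative (not employer-specific) Python sketch that captures an STM as structured data, with transformation rules expressed as the pseudo-SQL the listing mentions. Every system, table, and column name is hypothetical.

```python
# A Source-to-Target Mapping (STM) captured as data, one entry per target column.
# All system, table, and column names here are hypothetical.
stm = [
    {"target": "dw.dim_customer.customer_key",
     "source": "crm.customers.cust_id",
     "rule":   "direct move"},
    {"target": "dw.dim_customer.full_name",
     "source": "crm.customers.first_nm, crm.customers.last_nm",
     "rule":   "TRIM(first_nm) || ' ' || TRIM(last_nm)"},
    {"target": "dw.dim_customer.status_cd",
     "source": "crm.customers.status",
     "rule":   "CASE WHEN status = 'A' THEN 'ACTIVE' ELSE 'INACTIVE' END"},
]

# Render the mapping as a review-ready table for data engineers.
for m in stm:
    print(f"{m['target']:<35} <- {m['source']:<45} [{m['rule']}]")
```

Keeping the STM as data rather than prose means it can be validated, diffed in version control, and used to generate both documentation and ETL scaffolding.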
Posted 1 month ago
0 years
0 Lacs
Warangal Rural, Telangana, India
On-site
GETEC is one of the leading energy providers and contracting specialists for industry and the real-estate sector in Europe. Our value promise, "We have the energy for more," is the guiding principle for over 2,400 employees who navigate our customers through an increasingly complex energy world with excellent engineering know-how, outstanding regulatory expertise, proven speed of action, and comprehensive sustainability know-how.

Your future role: As a Reporting & Analytics Specialist, you will play a central part in advancing our data-driven management. You will improve existing reporting structures, drive the automation of data processes, and create the foundation for fact-based decisions in the finance area.
- You will develop and maintain interactive reports and dashboards for the management bodies as well as for finance and controlling requirements, using Power BI and other BI tools.
- You will design reliable data models and ensure that the relevant data for reporting, planning, and forecasting is available, consistent, and traceable at all times.
- You will work with financial and transaction data from various source systems (e.g. SAP, CRM, ERP) and support the quality and transparency of our data basis.
- You will contribute to the further development of our data infrastructure, including master-data logic, data-quality processes, and system adjustments in the controlling context.
- You will work closely with Controlling, Accounting, and other departments to align data-driven solutions precisely with the company's requirements.

You as a person, and what you bring:
- A good feel for financial relationships, analytical thinking, and enjoyment in deriving management-relevant insights from data.
- Experience in the finance or controlling environment, ideally in reporting, planning, or working with BI tools.
- A strong interest in data structures and models, and the willingness to learn tools such as SQL, Power BI, or relational databases and use them to optimize finance processes.
- Motivation to engage with ETL processes and data flows in order to develop a better understanding of where financial data comes from and how it is processed.
- An affinity for technological developments in the BI environment, ideally with initial exposure to Microsoft Fabric or OLAP technologies (e.g. Jedox, Lucanet), or the openness to learn them.
- A structured, solution-oriented way of working and enjoyment of cross-departmental collaboration.

We offer MORE: With us, you have the opportunity to actively help shape the energy transition. From day one you are part of the team and have the chance to take on responsibility directly. We enable a good work-life balance through flexible working-time planning with flextime and 30 days of vacation per year. We offer our employees the JobRad concept: stay fit 365 days a year while also protecting the environment. Look forward to numerous fitness and wellness offers through a membership with our cooperation partner Hansefit. Want to develop further? Use your free access to LinkedIn Learning and get access to countless training and learning videos. We also offer a wide range of internal and external training and coaching programs. Is the atmosphere important to you?
– Agile working and modern office concepts are a matter of course for us. Through our corporate benefits portal, you get access to discounts from over 800 well-known product and event providers. We warmly invite everyone to become part of the GETEC family and to experience our open and appreciative corporate culture. We are convinced that every one of us can contribute to the energy transition. It does not matter where you come from, what gender or sexual orientation you have, what faith you belong to, or whether you have impairments. The GETEC recruiting team looks forward to hearing from you and to receiving your application: MAKING A DIFFERENCE FOR GENERATIONS TO COME.
Posted 1 month ago
8.0 - 10.0 years
15 - 22 Lacs
Nagpur, Pune
Work from Office
Design, develop, and deploy Power BI solutions; build tabular and multidimensional models; create KPIs; and optimize ETL processes. Write complex DAX/MDX queries. Collaborate with teams, analyze KPIs, and present insights.
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Career Area: Technology, Digital and Data.

Job Description: Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do, but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here; we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

As a Software Engineer, you will contribute to the development and deployment of Caterpillar's state-of-the-art digital platform. You will be competent to perform all programming and development assignments without close supervision, and will normally be assigned the more complex aspects of systems work. You will work directly on complex application/technical problem identification and resolution, including responding to off-shift and weekend support calls, and will work independently on complex systems or infrastructure components that may be used by one or more applications or systems. You will drive application development focused on delivering business-valuable features, maintain high standards of software quality within the team by establishing good practices and habits, identify and encourage areas for growth and improvement within the team, and mentor junior developers. You will communicate with end users and internal customers to help direct the development, debugging, and testing of application software for accuracy, integrity, interoperability, and completeness, and perform integrated testing and customer acceptance testing of components, which requires careful planning and execution to ensure timely, quality results. The position manages the completion of its own work assignments and coordinates work with others. Based on past experience and knowledge, the incumbent normally works independently with minimal management input and review of end results. Typical customers include Caterpillar customers, dealers, other external companies who purchase services offered by Caterpillar, as well as internal business unit and/or service center groups. The position is challenged to quickly and correctly identify problems that may not be obvious, solves problems by determining the best course of action, within departmental guidelines, from many existing solutions, and sets priorities and establishes a work plan in order to complete broadly defined assignments and achieve desired results. The position participates in brainstorming sessions focused on developing new approaches to meeting quality goals in the stated measures.

Job Description (requirements):
- At least 5+ years of experience as a Snowflake SQL developer.
- Write complex SQL queries to implement ETL (Extract, Transform, Load) processes and for business intelligence reporting (a common ETL query pattern is sketched below).
- Deliver robust solutions through query optimization, ensuring data quality.
- Experience in writing functions and stored procedures.
- Analyze and translate functional specifications and user stories into technical specifications.
- Good to have: design/development experience in an ETL tool like DataStage or SnapLogic.
- Problem-solving skills; should communicate well with business partners and clients.
- Experience developing both parallel and sequencer jobs.
- Strong experience in the design and implementation of data warehousing application processes using an ETL tool.
- Experience in SQL and UNIX scripting.
- Experience with data warehousing concepts.
- Team player with proven abilities in guiding team members and enabling knowledge sharing among the team.
- Strong problem-solving and technical skills coupled with confident decision-making, enabling effective solutions that lead to high customer satisfaction.
- Strong understanding of data warehouse principles using fact tables, dimension tables, and star and snowflake schema modeling.
- Experience creating processes that load various operational sources (Snowflake, Oracle, SQL Server, flat files, Excel files) into a staging area.
- Expertise in using DataStage Designer to develop processes for extracting, transforming, integrating, and loading data into a data warehouse system (OLAP).
- Experience integrating various data sources (DB2-UDB, SQL Server, Oracle, and flat files) into a data staging area.
- Extensive experience creating tables and databases in Snowflake and SQL Server.
- Experience developing DataStage mappings using stages such as Transformer, Lookup, Join, Merge, Filter, Funnel, Aggregator, Sort, and Oracle Connector, as well as sequence jobs.
- Prepare technical design documents based on data model understanding and S2T mapping requirements.
- Performance, defect, and dependency analysis, and performance tuning of DataStage jobs and SQL queries.
- Excellent problem-solving and troubleshooting capabilities; a quick learner, highly motivated, result-oriented, and a good team player.

Posting Dates: June 10, 2025 - June 16, 2025. Caterpillar is an Equal Opportunity Employer. Not ready to apply? Join our Talent Community.
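As an illustration (not from the posting): one of the most common "complex SQL for ETL" patterns the role describes is keeping only the latest record per business key with a window function. The sketch below uses Python's sqlite3 (it needs a SQLite build with window-function support, 3.25+); the equivalent `ROW_NUMBER() OVER (PARTITION BY ...)` syntax works almost verbatim in Snowflake. All names are hypothetical.

```python
import sqlite3

# Warehouse ETL pattern: keep only the latest record per business key.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stg_orders (order_id INT, status TEXT, updated_at TEXT);
INSERT INTO stg_orders VALUES
  (1, 'OPEN',    '2025-01-01'),
  (1, 'SHIPPED', '2025-01-03'),
  (2, 'OPEN',    '2025-01-02');
""")

latest = con.execute("""
    SELECT order_id, status, updated_at
    FROM (
        SELECT *, ROW_NUMBER() OVER (
                   PARTITION BY order_id ORDER BY updated_at DESC) AS rn
        FROM stg_orders
    )
    WHERE rn = 1
""").fetchall()
print(latest)  # e.g. [(1, 'SHIPPED', '2025-01-03'), (2, 'OPEN', '2025-01-02')]
```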
Posted 1 month ago
2.0 years
4 Lacs
India
On-site
Responsibilities: As a Fullstack (React and Python) Developer, you will be part of a team consisting of AI/ML engineers, UI/UX engineers, and GIS engineers building end-to-end AI-based analytics software. You will be responsible for:
- Designing, developing, testing, deploying, managing, and maintaining the backend and frontend for various modules of the project.
- Working closely with the machine learning, image processing, and GIS teams to integrate the algorithmic output from the backend REST APIs (a minimal JSON-over-HTTP endpoint sketch follows below).
- Participating in UAT, and diagnosing and troubleshooting bugs and application integration issues.
- Participating in the entire software development lifecycle, from concept to delivery.
- Writing clean, well-documented, and efficient code following best practices and coding standards.
- Performing code reviews and providing constructive feedback to team members.
- Creating and maintaining documentation related to the developed processes and applications.

Qualification & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2-5 years of demonstrable experience designing, building, and working as a Fullstack Engineer on enterprise web applications. Ideally, this would include expert-level proficiency with Python (3.4+) and Django (2.1+); with JavaScript (ES6), HTML5, and CSS; and with ReactJS.
- Familiarity with common databases (NoSQL such as MongoDB) and data warehousing concepts (OLAP, OLTP).
- Understanding of REST concepts and building/interacting with REST APIs.
- Deep understanding of key UI concepts: cross-browser compatibility and implementing responsive web design; hands-on experience with test-driven development using testing libraries like Jest, PyTest, and Nose; familiarity with common JS visualization libraries built using D3, Chart.js, Highcharts, etc.
- Deep understanding of core backend concepts: developing and designing RESTful services and APIs; developing functional databases, applications, and servers to support websites on the back end; performance optimization and multithreading concepts; experience deploying and maintaining high-traffic infrastructure (performance-testing experience is a plus).
- Experience with containerization tools (e.g., Docker, Kubernetes) is a plus.
- Understanding of DevOps practices and continuous integration/continuous deployment (CI/CD) pipelines is a plus.
- Familiarity with Agile/Scrum methodologies is a plus.
In addition, the ideal candidate would have great problem-solving skills and familiarity with code versioning tools such as GitHub.

Job Type: Full-time. Pay: From ₹411,871.11 per year. Benefits: health insurance, paid sick time, paid time off, Provident Fund. Schedule: day shift, Monday to Friday. Ability to commute/relocate: Madhapur, Hyderabad, Telangana: reliably commute or plan to relocate before starting work (required). Application questions: Do you have experience with containerization tools (e.g., Docker, Kubernetes)? Do you have an understanding of DevOps practices and CI/CD pipelines? Do you have familiarity with Agile/Scrum methodologies? Experience: 2 years (required); full-stack development: 2 years (required). Location: Madhapur, Hyderabad, Telangana (required). Work Location: In person.
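For illustration only: the REST integration described above, reduced to a dependency-free JSON-over-HTTP endpoint using Python's standard library. The listing's actual stack is Django; this sketch, with its hypothetical `/api/modules` path and payload, just shows the request/response shape a React frontend would consume.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical resource the frontend would fetch; a Django view in the real stack.
MODULES = [{"id": 1, "name": "ingest"}, {"id": 2, "name": "analytics"}]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/modules":
            body = json.dumps(MODULES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Try: curl http://127.0.0.1:8000/api/modules
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```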
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Hyderabad, Telangana, India; Gurgaon, Haryana, India

Qualification (required):
- Proven hands-on experience designing, developing, and supporting database projects for analysis in a demanding environment.
- Proficiency in database design techniques: relational and dimensional designs.
- Strong understanding of the business analysis techniques used.
- High proficiency in the use of SQL or MDX queries.
- Ability to manage multiple maintenance, enhancement, and project-related tasks.
- Ability to work independently on multiple assignments and to work collaboratively within a team.
- Strong communication skills with both internal team members and external business stakeholders.

Added advantage:
- Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing.
- Experience working on Hive, Spark SQL, Redshift, or Snowflake.
- Experience working on Linux systems.
- Experience with Tableau, MicroStrategy, Power BI, or any BI tool.
- Expertise in programming in Python, Java, or shell script.

Role (roles & responsibilities):
- Be the front-facing person of the world's most scalable OLAP product company, Kyvos Insights.
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area.
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems.
- Be the go-to person for customers regarding technical issues during the project.
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product.
- Lead a few small but highly efficient teams of big data engineers.
- Report task status efficiently to stakeholders and customers.
- Good verbal and written communication skills.
- Be willing to work off hours to meet timelines.
- Be willing to travel or relocate as per project requirements.

Experience: 5 to 10 years. Job Reference Number: 11078
Posted 1 month ago
3.0 - 6.0 years
6 - 10 Lacs
Noida
On-site
Noida / Indore / Bangalore; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India; Gurugram, Haryana, India

Qualification:
- OLAP, data engineering, data warehousing, ETL.
- Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing.
- Experience working on Hive, Spark SQL, Redshift, or Snowflake.
- Experience in writing and troubleshooting SQL programming or MDX queries.
- Experience working on Linux.
- Experience in Microsoft Analysis Services (SSAS) or OLAP tools.
- Tableau, MicroStrategy, or any BI tool.
- Expertise in programming in Python, Java, or shell script would be a plus.

Skills Required: OLAP, MDX, SQL

Role:
- Be the front-facing person of the world's most scalable OLAP product company, Kyvos Insights.
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area.
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems.
- Be the go-to person for prospects regarding technical issues during the POV stage.
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product.
- Lead a few small but highly efficient teams of big data engineers.
- Report task status efficiently to stakeholders and customers.
- Good verbal and written communication skills.
- Be willing to work off hours to meet timelines.
- Be willing to travel or relocate as per project requirements.

Experience: 3 to 6 years. Job Reference Number: 10350
Posted 1 month ago
3.0 - 6.0 years
0 Lacs
Indore
On-site
Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Noida, Uttar Pradesh, India

Qualification: Pre-Sales Solution Engineer - India. Experience areas or skills:
- Pre-sales experience with software or analytics products.
- Excellent verbal and written communication skills.
- OLAP tools or Microsoft Analysis Services (MSAS).
- Data engineering, data warehousing, or ETL.
- Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing.
- Tableau, MicroStrategy, or any BI tool.
- HiveQL, Spark SQL, PL/SQL, or T-SQL.
- Writing and troubleshooting SQL programming or MDX queries.
- Working on Linux; programming in Python, Java, or JavaScript would be a plus.
- Filling in RFPs or questionnaires from customers; NDAs, success criteria, project closure, and other documentation.
- Be willing to travel or relocate as per requirements.

Role:
- Act as the main point of contact for customer contacts involved in the evaluation process.
- Product demonstrations to qualified leads.
- Product demonstrations in support of marketing activity such as events or webinars.
- Own RFPs, NDAs, PoC success criteria documents, PoC closure, and other documents.
- Secure alignment on process and documents with the customer/prospect.
- Own the technical-win phases of all active opportunities.
- Understand the customer's domain and database schema.
- Provide OLAP and reporting solutions.
- Work closely with customers to understand and resolve environment, OLAP cube, or reporting-related issues.
- Coordinate with the solutioning team on execution of the PoC as per the success plan.
- Create enhancement requests or identify requests for new features on behalf of customers or hot prospects.

Experience: 3 to 6 years. Job Reference Number: 10771
Posted 1 month ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AutoZone: AutoZone is the nation's leading retailer and a leading distributor of automotive replacement parts and accessories, with more than 6,000 stores in the US, Puerto Rico, Mexico, and Brazil. Each store carries an extensive line for cars, sport utility vehicles, vans, and light trucks, including new and remanufactured hard parts, maintenance items, and accessories. We also sell automotive diagnostic and repair software through ALLDATA, diagnostic and repair information through ALLDATAdiy.com, automotive accessories through AutoAnything.com, and auto and light truck parts and accessories through AutoZone.com. Since opening its first store in Forrest City, Ark. on July 4, 1979, the company has joined the New York Stock Exchange (NYSE: AZO) and earned a spot in the Fortune 500. AutoZone has been committed to providing the best parts, prices, and customer service in the automotive aftermarket industry. We have a rich culture and history of going the Extra Mile for our customers and our community. At AutoZone you're not just doing a job; you're playing a crucial role in creating a better experience for our customers, while creating opportunities to DRIVE YOUR CAREER almost anywhere! We are looking for talented people who are customer focused, enjoy helping others, and have the DRIVE to excel in a fast-paced environment!

Position Summary: The Systems Engineer will design data model solutions and ensure alignment between business and IT strategies, operating models, guiding principles, and software development, with a focus on the information layer. The Systems Engineer works across business lines and IT domains to ensure that information is viewed as a corporate asset. This includes its proper data definition, creation, usage, archival, and governance. The Systems Engineer works with other engineers and data architects to design overall solutions in accordance with industry best practices, principles, and standards, and strives to create and improve the quality of systems, provide more flexible solutions, and reduce time-to-market.

Key Responsibilities:
- Enhance and maintain the AutoZone information strategy.
- Ensure alignment of programs and projects with the strategic AZ Information Roadmap and related strategies.
- Perform gap analysis between current data structures and target data structures.
- Enhance and maintain the Enterprise Information Model.
- Work with service architects and application architects to assist with the creation of proper data access and utilization methods.
- Gather complex business requirements and translate product and project needs into data models supporting long-term solutions.
- Serve as a technical data strategy expert and lead the creation of technical requirements and design deliverables.
- Define and communicate data standards, industry best practices, technologies, and architectures.
- Check conformance to standards and resolve any conflicts by explaining and justifying architectural decisions.
- Recommend and evaluate new tools and methodologies as needed.
- Manage, communicate, and improve the data governance framework.
Requirements:
- A systems thinker, able to move fluidly between high-level abstract thinking and detail-oriented implementation, open-minded to new ideas, approaches, and technologies.
- A data- and fact-driven decision maker, with an ability to make quick decisions under uncertainty when necessary; able to quickly learn new technologies, tools, and organizational structures/strategies.
- Understanding of current industry-standard best practices regarding integration, architecture, tools, and processes.
- A self-starter who is naturally inquisitive, requiring only small pieces of the puzzle, across many technologies, new and legacy.
- Excellent written and verbal communication, presentation, and analytical skills, including the ability to effectively communicate complex technical concepts and designs to a broad range of people.

Education and/or Experience:
- Bachelor's degree in MIS, Computer Science, or a similar field, or equivalent experience required.
- Minimum 3+ years of experience and knowledge of database systems such as Oracle, Postgres, UDB/DB2, BigQuery, Spanner, JSON, and Couchbase.
- Minimum 2 years of experience with data requirements gathering, acquisition of data from different business systems, ingestion of data in GCP using managed services (BigQuery, Dataflow, Composer, Pub/Sub, and other ingestion technologies), curation of the data using DBT or similar technologies, and creation of data marts/wide tables for analysis and reporting consumption.
- Assembling large, complex sets of data that meet non-functional and functional business requirements.
- Identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using GCP and SQL technologies.
- Building analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics, including operational efficiency and customer acquisition.
- Working with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs while assisting with data-related technical issues.
- Relational and NoSQL database design capability across OLTP and OLAP.
- Excellent analytical and problem-solving skills.
- Excellent verbal and written communication skills.
- Ability to facilitate modeling sessions and communicate appropriately with IT and business customers.
- Experience with Agile software development methodologies.
- Experience with large, replicated databases across distributed and cloud data centers.

Our Values: An AutoZoner Always... PUTS CUSTOMERS FIRST, CARES ABOUT PEOPLE, STRIVES FOR EXCEPTIONAL PERFORMANCE, ENERGIZES OTHERS, EMBRACES DIVERSITY, HELPS TEAMS SUCCEED.
Posted 1 month ago
0.0 - 2.0 years
0 Lacs
Madhapur, Hyderabad, Telangana
On-site
Responsibilities: As a Fullstack (React and Python) Developer, you will be part of a team consisting of AI/ML engineers, UI/UX engineers, and GIS engineers building end-to-end AI-based analytics software. You will be responsible for:
- Designing, developing, testing, deploying, managing, and maintaining the backend and frontend for various modules of the project.
- Working closely with the machine learning, image processing, and GIS teams to integrate the algorithmic output from the backend REST APIs.
- Participating in UAT, and diagnosing and troubleshooting bugs and application integration issues.
- Participating in the entire software development lifecycle, from concept to delivery.
- Writing clean, well-documented, and efficient code following best practices and coding standards.
- Performing code reviews and providing constructive feedback to team members.
- Creating and maintaining documentation related to the developed processes and applications.

Qualification & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2-5 years of demonstrable experience designing, building, and working as a Fullstack Engineer on enterprise web applications. Ideally, this would include expert-level proficiency with Python (3.4+) and Django (2.1+); with JavaScript (ES6), HTML5, and CSS; and with ReactJS.
- Familiarity with common databases (NoSQL such as MongoDB) and data warehousing concepts (OLAP, OLTP).
- Understanding of REST concepts and building/interacting with REST APIs.
- Deep understanding of key UI concepts: cross-browser compatibility and implementing responsive web design; hands-on experience with test-driven development using testing libraries like Jest, PyTest, and Nose; familiarity with common JS visualization libraries built using D3, Chart.js, Highcharts, etc.
- Deep understanding of core backend concepts: developing and designing RESTful services and APIs; developing functional databases, applications, and servers to support websites on the back end; performance optimization and multithreading concepts; experience deploying and maintaining high-traffic infrastructure (performance-testing experience is a plus).
- Experience with containerization tools (e.g., Docker, Kubernetes) is a plus.
- Understanding of DevOps practices and continuous integration/continuous deployment (CI/CD) pipelines is a plus.
- Familiarity with Agile/Scrum methodologies is a plus.
In addition, the ideal candidate would have great problem-solving skills and familiarity with code versioning tools such as GitHub.

Job Type: Full-time. Pay: From ₹411,871.11 per year. Benefits: health insurance, paid sick time, paid time off, Provident Fund. Schedule: day shift, Monday to Friday. Ability to commute/relocate: Madhapur, Hyderabad, Telangana: reliably commute or plan to relocate before starting work (required). Application questions: Do you have experience with containerization tools (e.g., Docker, Kubernetes)? Do you have an understanding of DevOps practices and CI/CD pipelines? Do you have familiarity with Agile/Scrum methodologies? Experience: 2 years (required); full-stack development: 2 years (required). Location: Madhapur, Hyderabad, Telangana (required). Work Location: In person.
Posted 1 month ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description: SSIS, ADF, ETL.

Requirements:
- Bachelor's degree in computer science or a related field, or equivalent work experience, with 10+ years with SQL Server and relational databases.
- Strong expertise in data engineering, including querying and optimizing complex data sets, and experience with ETL tools like SSIS and ADF.
- Proficiency in Azure cloud technologies and data visualization and reporting tools (e.g., SSRS, Power BI), and a solid understanding of OLAP structures and relational database design.
- Excellent communication skills and the ability to work independently or collaboratively in a team, with preferred experience in data warehousing, .NET, Azure, AWS, and agile environments.

Job responsibilities:
- Bachelor's degree in computer science or a related field, or equivalent work experience, with 6 years with SQL Server and relational databases.
- Strong expertise in data engineering, including querying and optimizing complex data sets, and experience with ETL tools like SSIS and ADF.
- Proficiency in Azure cloud technologies and data visualization and reporting tools (e.g., SSRS, Power BI), and a solid understanding of OLAP structures and relational database design.
- Excellent communication skills and the ability to work independently or collaboratively in a team, with preferred experience in data warehousing, .NET, Azure, AWS, and agile environments.

What we offer:
- Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
- Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
- Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
- Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
- High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies.
Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 month ago
7.0 - 12.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role - Data Modeler/Senior Data Modeler Exp - 5 to 12 Yrs Locs - Hyderabad, Pune, Bengaluru Position - Permanent Must have skills: - Strong SQL - Strong Data Warehousing skills - ER/Relational/Dimensional Data Modeling - Data Vault Modeling - OLAP, OLTP - Schemas & Data Marts Good to have skills: - Data Vault - ERwin / ER Studio - Cloud Platforms (AWS or Azure)
Posted 1 month ago
2.0 - 5.0 years
12 - 13 Lacs
Noida
Work from Office
In India, we operate as Next Gear India Private Limited, a fully-owned subsidiary of Cotality with offices in Kolkata, West Bengal, and Noida, Uttar Pradesh. Next Gear India Private Limited plays a vital role in Cotality's Product Development capabilities, focusing on creating and delivering innovative solutions for the Property & Casualty (P&C) Insurance and Property Restoration industries. While Next Gear India Private Limited operates under its own registered name in India, we are seamlessly integrated into the Cotality family, sharing the same commitment to innovation, quality, and client success. When you join Next Gear India Private Limited, you become part of the global Cotality team. Together, we shape the future of property insights and analytics, contributing to a smarter and more resilient property ecosystem through cutting-edge technology and insights.

QA Automation Engineer
As a QA Automation Engineer specializing in Data Warehousing, you will play a critical role in ensuring that our data solutions are of the highest quality. You will work closely with data engineers and analysts to develop, implement, and maintain automated testing frameworks for data validation, ETL processes, data quality, and integration. Your work will ensure that data is accurate, consistent, and performs optimally across our data warehouse systems.

Responsibilities
- Develop and Implement Automation Frameworks: Design, build, and maintain scalable test automation frameworks tailored for data warehousing environments.
- Test Strategy and Execution: Define and execute automated test strategies for ETL processes, data pipelines, and database integration across a variety of data sources.
- Data Validation: Implement automated tests to validate data consistency, accuracy, completeness, and transformation logic (see the example sketch after this listing).
- Performance Testing: Ensure that the data warehouse systems meet performance benchmarks through automation tools and load-testing strategies.
- Collaborate with Teams: Work closely with data engineers, software developers, and data analysts to understand business requirements and design tests accordingly.
- Continuous Integration: Integrate automated tests into the CI/CD pipelines, ensuring that testing is part of the deployment process.
- Defect Tracking and Reporting: Use defect-tracking tools (e.g., JIRA) to log and track issues found during automated testing, ensuring that defects are resolved in a timely manner.
- Test Data Management: Develop strategies for handling large volumes of test data while maintaining data security and privacy.
- Tool and Technology Evaluation: Stay current with emerging trends in automation testing for data warehousing and recommend tools, frameworks, and best practices.

Job Qualifications: Requirements and Skills
- At least 4+ years of experience
- Solid understanding of data warehousing concepts (ETL, OLAP, data marts, data vault, star/snowflake schemas, etc.)
- Proven experience in building and maintaining automation frameworks using tools like Python, Java, or similar, with a focus on database and ETL testing
- Strong knowledge of SQL for writing complex queries to validate data, test data pipelines, and check transformations
- Experience with ETL tools (e.g., Matillion, Qlik Replicate) and their testing processes
- Performance testing experience
- Experience with version control systems like Git
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues
- Strong communication and collaboration skills
- Attention to detail and a passion for delivering high-quality solutions
- Ability to work in a fast-paced environment and manage multiple priorities
- Enthusiasm for learning new technologies and frameworks

Experience with the following tools and technologies is desired:
- Qlik Replicate
- Matillion ETL
- Snowflake
- Data Vault warehouse design
- Power BI
- Azure Cloud, including Logic Apps, Azure Functions, and ADF
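To illustrate the kind of automated data validation this listing describes, here is a minimal pytest-style sketch. It is an assumption-laden example, not this employer's framework: sqlite3 stands in for the real warehouse connection, and the table names (src_orders, dw_orders) are hypothetical.

```python
# Minimal sketch of automated ETL validation checks (hypothetical tables).
import sqlite3
import pytest

@pytest.fixture
def conn():
    # In-memory stand-in for source and target warehouse connections.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE src_orders (id INTEGER, amount REAL);
        CREATE TABLE dw_orders  (id INTEGER, amount REAL);
        INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5);
        INSERT INTO dw_orders  VALUES (1, 10.0), (2, 25.5);
    """)
    yield con
    con.close()

def test_row_counts_match(conn):
    # Completeness: every source row should have landed in the target.
    src = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM dw_orders").fetchone()[0]
    assert src == tgt

def test_amount_totals_match(conn):
    # Accuracy: an aggregate checksum should survive the transformation.
    src = conn.execute("SELECT ROUND(SUM(amount), 2) FROM src_orders").fetchone()[0]
    tgt = conn.execute("SELECT ROUND(SUM(amount), 2) FROM dw_orders").fetchone()[0]
    assert src == tgt
```

In a real Matillion or Snowflake pipeline, the same assertions would run against separate source and target connections, typically as a step in the CI/CD pipeline the listing mentions.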
Posted 1 month ago
1.0 - 3.0 years
3 - 6 Lacs
Chennai
Work from Office
Skill Set Required
GCP, Data Modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery

Data Modeller
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of conceptual, logical, and physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical experience implementing them (see the sketch after this listing)
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
- Working experience with at least one data modelling tool, preferably DBSchema
- Functional knowledge of the mutual fund industry is a plus
- Good understanding of GCP databases such as AlloyDB, CloudSQL, and BigQuery
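As an illustration of the physical-design decisions this listing emphasizes, below is a hedged sketch using the google-cloud-bigquery client to define a day-partitioned, clustered table. The project, dataset, table, and field names are invented, and running it requires the library installed and GCP credentials configured.

```python
# Hypothetical sketch: a partitioned, clustered BigQuery table definition.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

table = bigquery.Table(
    "my-project.sales.orders",  # assumed dataset and table names
    schema=[
        bigquery.SchemaField("order_id", "STRING"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("amount", "NUMERIC"),
        bigquery.SchemaField("order_ts", "TIMESTAMP"),
    ],
)
# Partition by day of order_ts so near-real-time reports scan only
# the partitions they need instead of the whole table.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="order_ts"
)
# Cluster within partitions so filters on customer_id read fewer blocks.
table.clustering_fields = ["customer_id"]

client.create_table(table, exists_ok=True)
```

Partition pruning plus clustering is BigQuery's rough analogue of the indexing and sharding choices called out above, since BigQuery has no conventional secondary indexes.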
Posted 1 month ago
13.0 - 17.0 years
37 - 45 Lacs
Noida
Work from Office
Involvement in solution planning
- Convert business specifications to technical specifications
- Write clean code and review the code of project team members (as applicable)
- Adhere to the Agile delivery model
- Able to solve L3 application-related issues
- Should be able to scale up on new technologies
- Should be able to document project artifacts

Technical Skills:
- Database and data warehouse skills; Object-Oriented Programming, design patterns, and development knowledge
- Azure cloud experience with cloud-native development as well as migration of existing applications
- Hands-on development and implementation experience in Azure Data Factory, Azure Databricks, Azure App Services, and Azure Service Bus (a brief sketch follows this listing)
- Agile development and a DevSecOps understanding of the end-to-end development life cycle are required
- Experience in cutting-edge OLAP cube technologies like Kyligence would be a plus
- Prior work in the financial domain preferred
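As a small illustration of the hands-on Azure Data Factory work mentioned above, here is a hedged sketch that triggers a pipeline run through the azure-mgmt-datafactory SDK. The subscription, resource group, factory, pipeline, and parameter names are all placeholders, not details from this listing.

```python
# Hypothetical sketch: kicking off an ADF pipeline run programmatically.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()  # picks up env/CLI/managed identity
adf = DataFactoryManagementClient(credential, "<subscription-id>")

# Trigger a run of an assumed ingestion pipeline with a business-date parameter.
run = adf.pipelines.create_run(
    resource_group_name="rg-data",        # placeholder
    factory_name="adf-dev",               # placeholder
    pipeline_name="pl_ingest_orders",     # placeholder
    parameters={"load_date": "2024-01-31"},
)
print("Started pipeline run:", run.run_id)
```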
Posted 1 month ago
5.0 - 10.0 years
9 - 13 Lacs
Gurugram
Work from Office
Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near-real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise
- Minimum of 5+ years of related experience
- Experience in modeling and business system design
- Good hands-on experience with DataStage and cloud-based ETL services
- Great expertise in writing T-SQL code
- Well versed in data warehouse schemas and OLAP techniques

Preferred technical and professional experience
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate
- Must be a strong team player/leader
- Ability to lead data transformation projects with multiple junior data engineers
- Strong oral, written, and interpersonal skills for interacting with all levels of the organization
- Ability to clearly communicate complex business problems and technical solutions
Posted 1 month ago
7.0 - 10.0 years
6 - 11 Lacs
Kolkata
Work from Office
The Senior PL/SQL SAS Mart Developer (BFSI) will be a crucial member of the data warehousing team, responsible for the design, development, and maintenance of data marts specifically tailored for the Banking, Financial Services, and Insurance (BFSI) domain. This role requires a strong combination of deep PL/SQL development skills for data transformation and manipulation within an Oracle environment, coupled with expertise in SAS for data mart creation, reporting, and analytical purposes. The successful candidate will leverage their extensive experience in the BFSI sector to build efficient, robust, and compliant data marts that support critical business intelligence, reporting, and analytical needs.

Responsibilities:
- Data Mart Design and Development: Design and develop dimensional and multi-dimensional data marts based on business requirements within the BFSI context, utilizing both PL/SQL for backend data processing and SAS for data mart structuring and access.
- PL/SQL Development for Data Marts: Utilize advanced PL/SQL skills to extract, transform, and load data from the central data warehouse or source systems into the designated data marts, ensuring data quality, performance, and adherence to BFSI data standards. This includes developing complex stored procedures, functions, packages, and triggers (a brief sketch follows this listing).
- SAS Data Mart Implementation: Leverage SAS tools (e.g., SAS Data Integration Studio, SAS Enterprise Guide, SAS OLAP Cube Studio) to structure, populate, and manage data marts for efficient reporting and analysis.
- BFSI Data Expertise: Apply a strong understanding of BFSI data models, key performance indicators (KPIs), regulatory reporting requirements (e.g., BASEL, RBI, Solvency II), and common analytical needs within the financial services and insurance industries to design relevant and effective data marts.
- Performance Optimization: Optimize PL/SQL code and SAS processes related to data mart development and access to ensure high performance and efficient query execution for reporting and analytical tools.
- Data Quality and Governance: Implement and enforce data quality checks within both PL/SQL and SAS processes to ensure the accuracy, consistency, and integrity of data within the data marts, adhering to the bank's data governance policies and BFSI-specific data quality standards.
- Troubleshooting and Support: Investigate and resolve issues related to data mart performance, data accuracy, and accessibility, involving both PL/SQL and SAS components. Provide expert-level support for data mart users.
- Documentation: Create and maintain comprehensive technical documentation for data mart designs, PL/SQL code, SAS jobs, data mappings, and user guides, ensuring compliance with BFSI documentation standards.
- Collaboration: Work closely with business analysts, report developers, data scientists, and other stakeholders within the BFSI departments to understand their data mart requirements and deliver solutions that meet their analytical and reporting needs.
- Security and Compliance: Ensure that data marts, and the processes used to build and access them, comply with the bank's security policies and relevant BFSI regulatory requirements regarding data access and privacy.
- Mentoring: Provide technical guidance and mentorship to junior developers on PL/SQL and SAS skills related to data mart development within the BFSI domain.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Information Technology, Statistics, Economics, Finance, or a related field.
- Proven experience of 7-10 years in developing data warehousing and business intelligence solutions, with a significant focus on data mart development within the Banking, Financial Services, and Insurance (BFSI) sector.
- Extensive and demonstrable expertise in PL/SQL development, including advanced querying, stored procedures, functions, packages, and performance tuning within an Oracle environment.
- Strong proficiency in SAS programming and experience using SAS tools for data mart creation and management (e.g., SAS Data Integration Studio, SAS Enterprise Guide, SAS OLAP Cube Studio, SAS Metadata Server).
- Deep understanding of BFSI data models, common KPIs, and regulatory reporting requirements (e.g., BASEL, RBI guidelines, Solvency II, IFRS).
- Strong SQL skills and experience working with relational databases, particularly Oracle.
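To make the PL/SQL mart-load responsibility above concrete, here is a minimal, hypothetical sketch of invoking a packaged load procedure from Python via the python-oracledb driver. The package and procedure name (mart_pkg.load_daily), the credentials, and the business-date parameter are all invented for illustration; the actual mart logic would live in PL/SQL on the database side.

```python
# Hypothetical sketch: driving a PL/SQL mart-load procedure from Python.
import datetime
import oracledb

# Placeholder credentials and DSN; a real deployment would use a wallet
# or secrets manager rather than hard-coded values.
conn = oracledb.connect(user="dw_user", password="***", dsn="dwhost/dwsvc")
try:
    with conn.cursor() as cur:
        # A typical mart refresh is wrapped in a packaged PL/SQL procedure;
        # here we pass the business date the load should process.
        cur.callproc("mart_pkg.load_daily", [datetime.date(2024, 1, 31)])
    conn.commit()  # persist the mart load
finally:
    conn.close()
```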
Posted 1 month ago
5.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near-real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise
- Minimum of 5+ years of related experience
- Experience in modeling and business system design
- Good hands-on experience with DataStage and cloud-based ETL services
- Great expertise in writing T-SQL code
- Well versed in data warehouse schemas and OLAP techniques

Preferred technical and professional experience
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate
- Must be a strong team player/leader
- Ability to lead data transformation projects with multiple junior data engineers
- Strong oral, written, and interpersonal skills for interacting with all levels of the organization
- Ability to clearly communicate complex business problems and technical solutions
Posted 1 month ago