5.0 years
4 - 8 Lacs
Chennai
On-site
DESCRIPTION

Key Responsibilities:
- Own and develop advanced substitutability analysis frameworks combining text-based and visual matching capabilities
- Drive technical improvements to product matching models to enhance accuracy beyond the current 79% in structured categories
- Design category-specific matching criteria, particularly for complex categories like fashion, where accuracy is currently at 20%
- Develop and implement advanced image matching techniques including pattern recognition, style segmentation, and texture analysis
- Create performance measurement frameworks to evaluate product matching accuracy across different product categories
- Partner with multiple data and analytics teams to integrate various data signals
- Provide technical expertise in scaling substitutability analysis across 2,000 different product types in multiple markets

Technical Requirements:
- Deep expertise in developing hierarchical matching systems
- Strong background in image processing and visual similarity algorithms
- Experience with large-scale data analysis and model performance optimization
- Ability to work with multiple data sources and complex matching criteria

Success Metrics:
- Drive improvement in substitutability accuracy to >70% across all categories
- Reduce manual analysis time for product matching identification
- Successfully implement enhanced visual matching capabilities
- Create scalable solutions for multi-market implementation

A day in the life
Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Oracle, and OLAP technologies. Provide online reporting and analysis using OBIEE business intelligence tools and a logical abstraction layer against large, multi-dimensional datasets and multiple sources. Gather business and functional requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture. Analyze source data systems and drive best practices in source teams. Participate in the full development life cycle, end to end, from design, implementation, and testing to documentation, delivery, support, and maintenance. Produce comprehensive, usable dataset documentation and metadata. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. Evaluate and make decisions around the use of new or existing software products and tools. Mentor junior Business Research Analysts.

About the team
The RBS-Availability program includes Selection Addition (where new Head-Selections are added based on gaps identified by Selection Monitoring, SM), Buyability (ensuring new HS additions are buyable and recovering established ASINs that became non-buyable), SoROOS (rectifying defects for sourceable out-of-stock ASINs), Glance View Speed (offering ASINs with the best promise speed based on Store/Channel/FC-level nuances), Emerging MPs, and ASIN Productivity (ensuring every ASIN's actual contribution profit meets or exceeds the estimate). The North Star of the Availability program is to "Ensure all customer-relevant (HS) ASINs are available in Amazon Stores with guaranteed delivery promise at an optimal speed."
To achieve this, we collaborate with SM, SCOT, Retail Selection, Category, and US-ACES to identify overall opportunities, defect drivers, and ingress across forecasting, sourcing, procurability, and availability systems, fixing them through UDE/Tech-based solutions.

BASIC QUALIFICATIONS
- 5+ years of SQL experience
- Experience programming to extract, transform and clean large (multi-TB) data sets
- Experience with theory and practice of design of experiments and statistical analysis of results
- Experience with AWS technologies
- Experience in scripting for automation (e.g. Python) and advanced SQL skills
- Experience with theory and practice of information retrieval, data science, machine learning and data mining

PREFERRED QUALIFICATIONS
- Experience working directly with business stakeholders to translate between data and business needs
- Experience managing, analyzing and communicating results to senior leadership

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
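For illustration of the text-plus-visual matching this role centres on, here is a hedged sketch of a combined substitutability score; it assumes product embeddings are already produced upstream by separate text and image encoders, and the 0.6/0.4 weights and embedding sizes are illustrative assumptions rather than details from the posting.

```python
# Hedged sketch only: blending text and image similarity into one substitutability score.
# Embeddings, weights, and vector sizes are assumed for illustration.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def substitutability_score(text_a, text_b, img_a, img_b,
                           w_text: float = 0.6, w_img: float = 0.4) -> float:
    """Weighted blend of text-based and visual similarity for a candidate product pair."""
    return w_text * cosine(text_a, text_b) + w_img * cosine(img_a, img_b)

# Toy usage with random vectors standing in for real product embeddings.
rng = np.random.default_rng(0)
t1, t2 = rng.normal(size=384), rng.normal(size=384)   # text embeddings
i1, i2 = rng.normal(size=512), rng.normal(size=512)   # image embeddings
print(f"substitutability score: {substitutability_score(t1, t2, i1, i2):.3f}")
```

A fixed weighted blend is the simplest possible fusion; category-specific weights or a learned combiner are the usual next steps when per-category accuracy (e.g., fashion) lags behind structured categories.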
Posted 22 hours ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview:
As a Data Architect, you are responsible for designing and managing scalable, secure, and high-performance data architectures that support GEDU and customer needs. This role ensures that GEDU's data assets are structured and managed in a way that enables the business to generate insights, make data-driven decisions, and maintain data integrity across GEDU and its customers. The Data Architect will work closely with business leaders, data engineers, data scientists, and IT teams to align the data architecture with GEDU's strategic goals.

Key Responsibilities:

Data Architecture Design: Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams. Develop a data strategy and roadmap that aligns with GEDU business objectives and ensures the scalability of data systems. Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management: Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools. Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets. Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders: Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs. Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership: Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools. Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security: Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems. Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership: Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management. Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Extensive Data Architecture Expertise: Over 7 years of experience in data architecture, data modeling, and database management. Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions. Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.

Advanced Knowledge of Data Platforms: Expertise in the Azure cloud data platform is a must; other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus. Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing. Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).

Data Governance & Compliance: Strong understanding of data governance principles, data lineage, and data stewardship. Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.

Technical Leadership: Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise. Strong programming skills in languages such as Python, SQL, R, or Scala.

Pre-Sales Responsibilities:

Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.

Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.

Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.

Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.

Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

To know our privacy policy, please click the link below: https://gedu.global/wp-content/uploads/2023/09/GEDU-Privacy-Policy-22092023-V2.0-1.pdf
Posted 23 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Skills:
- Database programming - SQL / PL-SQL / T-SQL
- ETL - data pipelines, data preparation
- Analytics - BI tools

Roles & Responsibilities
• Implement some of the world's largest (by data size) big data analytics projects using the Kyvos platform
• Preparation of data for BI modeling using Spark, Hive, SQL, and other ETL/ELT tools
• OLAP data modelling
• Tuning of models for the fastest, sub-second query performance from business intelligence tools
• Communicate with customer stakeholders on business requirements
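As a rough sketch of the Spark/Hive data-preparation step described above (the database, table, and column names are assumed for the example, not taken from the posting), a PySpark job might shape a raw table into an aggregate that an OLAP model can be built on:

```python
# Illustrative only: preparing a daily sales aggregate for OLAP modelling with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("bi-data-prep")
         .enableHiveSupport()          # read/write Hive-managed tables
         .getOrCreate())

raw = spark.table("staging.sales_raw")            # assumed Hive source table

fact = (raw
        .dropDuplicates(["order_id", "line_id"])              # basic de-duplication
        .filter(F.col("order_ts").isNotNull())                # drop unusable rows
        .withColumn("order_date", F.to_date("order_ts"))      # conform grain to day
        .groupBy("order_date", "product_id", "store_id")
        .agg(F.sum("quantity").alias("qty"),
             F.sum("net_amount").alias("net_amount")))

# Persist as a partitioned table the cube / semantic layer can build on.
(fact.write.mode("overwrite")
     .partitionBy("order_date")
     .saveAsTable("mart.fact_daily_sales"))
```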
Posted 1 day ago
7.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Job Overview
We are seeking a highly skilled and experienced Lead Data Engineer (AWS) to spearhead the design, development, and optimization of our cloud-based data infrastructure. As a technical leader, you will drive scalable data solutions using AWS services and modern data engineering tools, ensuring robust data pipelines and architectures for real-time and batch data processing. The ideal candidate is a hands-on technologist with a deep understanding of distributed data systems, cloud-native data services, and team leadership in Agile environments.

Responsibilities:
- Design, build, and maintain scalable, fault-tolerant, and secure data pipelines using AWS-native services (e.g., Glue, EMR, Lambda, S3, Redshift, Athena, Kinesis).
- Lead end-to-end implementation of data architecture strategies including ingestion, storage, transformation, and data governance.
- Collaborate with data scientists, analysts, and application developers to understand data requirements and deliver optimal solutions.
- Ensure best practices for data quality, data cataloging, lineage tracking, and metadata management using tools like AWS Glue Data Catalog or Apache Atlas.
- Optimize data pipelines for performance, scalability, and cost-efficiency across structured and unstructured data sources.
- Mentor and lead a team of data engineers, providing technical guidance, code reviews, and architecture recommendations.
- Implement data modeling techniques (OLTP/OLAP), partitioning strategies, and data warehousing best practices.
- Maintain CI/CD pipelines for data infrastructure using tools such as AWS CodePipeline and Git.
- Monitor production systems and lead incident response and root cause analysis for data infrastructure issues.
- Drive innovation by evaluating emerging technologies and proposing improvements to the existing data platform.

Skills & Qualifications:
- Minimum 7 years of experience in data engineering with at least 3+ years in a lead or senior engineering role.
- Strong hands-on experience with AWS data services: S3, Redshift, Glue, Lambda, EMR, Athena, Kinesis, RDS, DynamoDB.
- Advanced proficiency in Python/Scala/Java for ETL development and data transformation logic.
- Deep understanding of distributed data processing frameworks (e.g., Apache Spark, Hadoop).
- Solid grasp of SQL and experience with performance tuning in large-scale environments.
- Experience implementing data lakes, lakehouse architectures, and data warehousing solutions in the cloud.
- Knowledge of streaming data pipelines using Kafka, Kinesis, or AWS MSK.
- Proficiency with infrastructure-as-code (IaC) using Terraform or AWS CloudFormation.
- Experience with DevOps practices and tools such as Docker, Git, Jenkins, and monitoring tools (CloudWatch, Prometheus, Grafana).
- Expertise in data governance, security, and compliance in cloud environments. (ref:hirist.tech)
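For context on the AWS-native pipeline work listed above, a minimal AWS Glue job skeleton might look like the following; it only runs inside a Glue job, and the catalog database, table, column, and bucket names are placeholder assumptions rather than anything from this posting.

```python
# Minimal Glue job sketch: read a catalogued table, clean it, land curated parquet on S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a Glue Data Catalog table as a DynamicFrame, then switch to a Spark DataFrame.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="orders").toDF()

cleaned = (orders.dropDuplicates(["order_id"])
                 .filter(F.col("order_status").isNotNull()))

# Partitioned parquet so Athena / Redshift Spectrum can prune on order_date.
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```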
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior SQL Developer at our company, you will play a crucial role in our BI & analytics team by expanding and optimizing our data and data queries. Your responsibilities will include optimizing data flow and collection for consumption by our BI & Analytics platform. You should be an experienced builder of data queries and a data wrangler with a passion for optimizing data systems from the ground up. Collaborating with software developers, database architects, data analysts, and data scientists, you will support data and product initiatives and ensure consistent optimal data delivery architecture across ongoing projects. Your self-directed approach will be essential in supporting the data needs of multiple systems and products. If you are excited about enhancing our company's data architecture to support our upcoming products and data initiatives, this role is perfect for you.

Your essential functions will involve creating and maintaining optimal SQL queries, Views, Tables, and Stored Procedures. By working closely with various business units such as BI, Product, and Reporting, you will contribute to developing the data warehouse platform vision, strategy, and roadmap. Understanding physical and logical data models and ensuring high-performance access to diverse data sources will be key aspects of your role. Encouraging the adoption of organizational frameworks through documentation, sample code, and developer support will also be part of your responsibilities. Effective communication of progress and effectiveness of developed frameworks to department heads and managers will be essential.

To be successful in this role, you should possess a Bachelor's or Master's degree or an equivalent combination of education and experience in a relevant field. Proficiency in T-SQL, Data Warehouses, Star Schema, Data Modeling, OLAP, SQL, and ETL is required. Experience in creating Tables, Views, and Stored Procedures is crucial. Familiarity with BI and Reporting Platforms, industry trends, and knowledge of multiple database platforms like SQL Server and MySQL are necessary. Proficiency in Source Control and Project Management tools such as Azure DevOps, Git, and JIRA is expected. Experience with SonarQube for clean T-SQL coding practices and DevOps best practices will be advantageous. Applicants must have exceptional written and spoken communication skills and strong team-building abilities to contribute to making strategic decisions and advising senior management on technical matters. With at least 5+ years of experience in a data warehousing position, including working as a SQL Developer, and experience in the system development lifecycle, you should also have a proven track record in data integration, consolidation, enrichment, and aggregation. Strong analytical skills, attention to detail, organizational skills, and the ability to mentor junior colleagues will be crucial for success in this role.

This full-time position requires flexibility to support different time zones between 12 PM IST and 9 PM IST, Monday through Friday. You will work in a Hybrid Mode and spend at least 2 days working from the office in Hyderabad. Occasional evening and weekend work may be expected based on client needs or job-related emergencies. This job description may not cover all responsibilities and duties, which may change with or without notice.
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
dehradun, uttarakhand
On-site
As a Data Modeler, your primary responsibility will be to design and develop conceptual, logical, and physical data models supporting enterprise data initiatives. You will work with modern storage formats like Parquet and ORC, and build and optimize data models within Databricks Unity Catalog. Collaborating with data engineers, architects, analysts, and stakeholders, you will ensure alignment with ingestion pipelines and business goals. Translating business and reporting requirements into robust data architecture, you will follow best practices in data warehousing and Lakehouse design. Your role will involve maintaining metadata artifacts, enforcing data governance, quality, and security protocols, and continuously improving modeling processes. You should have over 10 years of hands-on experience in data modeling within Big Data environments. Your expertise should include OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices. Proficiency in modeling methodologies like Kimball, Inmon, and Data Vault is essential. Hands-on experience with modeling tools such as ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Experience in Databricks with Unity Catalog and Delta Lake is required, along with a strong command of SQL and Apache Spark for querying and transformation. Familiarity with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database, is beneficial. Exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are necessary for this role, as well as the ability to work in cross-functional agile environments. A Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field is required. Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure are a plus. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks like GDPR and HIPAA are advantageous.,
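A minimal sketch of the Databricks/Delta Lake modelling work described above, assuming a Unity Catalog-enabled workspace and an existing main.sales catalog and schema; all table and column names are illustrative.

```python
# Sketch: build a conformed dimension as a governed Delta table in Unity Catalog.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # supplied by the Databricks runtime

orders = spark.table("main.sales.orders_raw")

dim_customer = (orders
                .select("customer_id", "customer_name", "segment", "country")
                .dropDuplicates(["customer_id"]))

# Write into the three-level Unity Catalog namespace as a Delta table.
(dim_customer.write.format("delta")
             .mode("overwrite")
             .saveAsTable("main.sales.dim_customer"))

# Optional physical tuning for a common filter column (Databricks Delta SQL).
spark.sql("OPTIMIZE main.sales.dim_customer ZORDER BY (country)")
```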
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior Data Modeler
Location: PAN India - FTE, Contract - Remote

Job Summary:
We are looking for a highly skilled Senior Data Modeler with deep expertise in dimensional modeling, data warehouse design, and hands-on experience with data modeling tools. The ideal candidate should be able to design conceptual, logical, and physical models, implement SCD Type 2 slowly changing dimensions, and work collaboratively across business and engineering teams. Strong SQL, MDM understanding, and cloud exposure are key.

Key Responsibilities:
- Develop conceptual, logical, and physical data models to support enterprise data initiatives.
- Design and optimize dimensional models for OLAP systems; understand OLTP vs OLAP and their associated schemas (Star, Snowflake).
- Implement SCD Type 2 logic for maintaining historical data.
- Create surrogate keys, and handle many-to-many relationships efficiently in data models.
- Translate functional and technical requirements into data models in coordination with architects and platform teams.
- Create source-to-target mappings (STTM) and technical design documents for development teams.
- Collaborate with Data Stewards, Business Analysts, Data Engineers, Scrum Masters, and Governance Teams.
- Validate and test data models, ensuring high data quality, consistency, and compliance.
- Provide support for metadata management, MDM, and data governance initiatives.
- Work with CI/CD pipelines for model deployment and cloud platforms (preferred: Azure/AWS/GCP).

Must-Have Skills:
- 5+ years of data modeling experience across conceptual, logical, and physical layers.
- Strong command over dimensional modeling, normalization/denormalization, SCD2, and data warehousing.
- Proficiency in data modeling tools (e.g., Erwin, ER/Studio, SAP PowerDesigner).
- Strong hands-on experience in SQL and understanding of data profiling/validation.
- Experience with metadata and master data management (MDM).
- Experience in surrogate key creation, handling factless fact tables, and designing dimension and fact tables.
- Excellent communication and stakeholder engagement skills.

Nice-to-Have Skills:
- Experience with cloud platforms (Azure/AWS/GCP) in the context of data modeling.
- Exposure to Python for validation scripts or data pipeline development.
- Familiarity with data cataloging tools and CI/CD for model deployment.
- Experience working in agile/scrum environments.

Stakeholders You Will Work With:
- Data Stewards/Business Analysts – for business definitions and data dictionary alignment.
- Program Managers/Scrum Masters – for delivery timelines and scope alignment.
- Data Engineers – for model implementation and validation.
- Governance/Platform Team – for approvals and deployment management.
- Business SMEs – to refine requirements and ensure functional clarity.

Red Flags (to avoid in candidate profile):
- Only discussing data ingestion/pipelines without focus on modeling principles.
- Lack of exposure to conceptual/logical/physical modeling as core responsibilities.
- Emphasis only on data analysis or governance without clear modeling experience.
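Since the posting calls out SCD Type 2 and surrogate keys specifically, here is a minimal expire-then-insert sketch of that pattern, assuming an engine with MERGE support (Delta Lake, Snowflake, and similar); the dim_customer/stg_customer tables, the tracked columns, and the surrogate-key choice are assumptions for illustration only.

```python
# SCD Type 2 sketch in Spark SQL: expire changed current rows, then insert new versions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# 1) Expire current dimension rows whose tracked attributes changed in the staging feed.
spark.sql("""
    MERGE INTO dim_customer AS d
    USING stg_customer AS s
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHEN MATCHED AND (d.segment <> s.segment OR d.country <> s.country) THEN
      UPDATE SET is_current = false, effective_end = current_date()
""")

# 2) Insert a fresh current row for every key that no longer has one (changed or brand new).
spark.sql("""
    INSERT INTO dim_customer
    SELECT  monotonically_increasing_id() AS customer_sk,   -- illustrative surrogate key
            s.customer_id, s.segment, s.country,
            current_date()                AS effective_start,
            DATE'9999-12-31'              AS effective_end,
            true                          AS is_current
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHERE d.customer_id IS NULL
""")
```

Real pipelines typically replace monotonically_increasing_id() with a sequence or identity column so surrogate keys stay unique across loads.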
Posted 1 day ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Opportunity We are seeking a highly skilled and experienced Analytics Specialist to design, develop, and deliver robust data-driven solutions using Power BI, Power Apps, and related Microsoft technologies. The ideal candidate will have strong analytical skills, hands-on experience in AI projects, and a deep understanding of business intelligence tools and data modeling. How You’ll Make An Impact Design and develop Power BI reports, dashboards, and data models to meet business requirements. Manage the PBI/Power apps/Ai projects independently and work with global stakeholders. Administer Power BI service and integrate reports with other business applications. Create and manage OLAP cubes and tabular models compatible with data warehouse standards. Perform advanced DAX calculations and build efficient data models. Ensure security compliance through implementation of row-level security and access controls. Collaborate with cross-functional teams to understand reporting needs and deliver actionable insights. Maintain documentation and provide knowledge transfer to stakeholders. Contribute to AI-based analytics projects and drive automation using APIs and embedded analytics. Manage and deliver Q&O monthly performance reports with high accuracy and timeliness. Continuously validate, automate, and improve reporting quality to ensure data integrity and actionable insights. Managing multiple stakeholders across functions and business lines, requiring strong influence skills. Leading projects independently with limited supervision; strong ownership and accountability needed. Integrating data from multiple systems and maintaining reporting consistency. Communicating insights effectively to senior leaders and diverse teams; ability to simplify complex data. Driving and managing analytics/reporting projects end-to-end, including scope, timelines, delivery, and stakeholder engagement. Capture business requirements and transform them into efficient Power BI dashboards, KPI scorecards, and reports. Build and maintain Analysis Services reporting models and develop scalable data models aligned with BI best practices. Interact with BU teams to identify improvement opportunities and implement enhancement strategies. Seek user feedback for enhancements and remain updated with trends in performance and analytics. Responsible to ensure compliance with applicable external and internal regulations, procedures, and guidelines. Living Hitachi Energy’s core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business. Your Background Graduate/Postgraduate in Engineering, Finance, Business Management, Data Science, Statistics, Mathematics, or similar quantitative field. Minimum 7 years of experience. Power BI (development, DAX, publishing, and scheduling). Hands on experience in Power Apps, SQL Data Warehouse, SSAS, OLAP CUBE, Microsoft Azure, Visual Studio. Exposure to AI and automation projects. Microsoft DA-100 certification preferred. Proficiency in both spoken & written English language is required. Hitachi Energy is a global technology leader in electrification, powering a sustainable energy future through innovative power grid technologies with digital at the core. Over three billion people depend on our technologies to power their daily lives. 
With over a century in pioneering mission-critical technologies like high-voltage, transformers, automation, and power electronics, we are addressing the most urgent energy challenge of our time – balancing soaring electricity demand, while decarbonizing the power system. Headquartered in Switzerland, we employ over 50,000 people in 60 countries and generate revenues of around $16 billion USD. We welcome you to apply today.
Posted 1 day ago
5.0 years
4 - 8 Lacs
Chennai
On-site
- 5+ years of SQL experience
- Experience programming to extract, transform and clean large (multi-TB) data sets
- Experience with theory and practice of design of experiments and statistical analysis of results
- Experience with AWS technologies
- Experience in scripting for automation (e.g. Python) and advanced SQL skills
- Experience with theory and practice of information retrieval, data science, machine learning and data mining

Key Responsibilities:
- Own and develop advanced substitutability analysis frameworks combining text-based and visual matching capabilities
- Drive technical improvements to product matching models to enhance accuracy beyond the current 79% in structured categories
- Design category-specific matching criteria, particularly for complex categories like fashion, where accuracy is currently at 20%
- Develop and implement advanced image matching techniques including pattern recognition, style segmentation, and texture analysis
- Create performance measurement frameworks to evaluate product matching accuracy across different product categories
- Partner with multiple data and analytics teams to integrate various data signals
- Provide technical expertise in scaling substitutability analysis across 2,000 different product types in multiple markets

Technical Requirements:
- Deep expertise in developing hierarchical matching systems
- Strong background in image processing and visual similarity algorithms
- Experience with large-scale data analysis and model performance optimization
- Ability to work with multiple data sources and complex matching criteria

Success Metrics:
- Drive improvement in substitutability accuracy to >70% across all categories
- Reduce manual analysis time for product matching identification
- Successfully implement enhanced visual matching capabilities
- Create scalable solutions for multi-market implementation

A day in the life
Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Oracle, and OLAP technologies. Provide online reporting and analysis using OBIEE business intelligence tools and a logical abstraction layer against large, multi-dimensional datasets and multiple sources. Gather business and functional requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture. Analyze source data systems and drive best practices in source teams. Participate in the full development life cycle, end to end, from design, implementation, and testing to documentation, delivery, support, and maintenance. Produce comprehensive, usable dataset documentation and metadata. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. Evaluate and make decisions around the use of new or existing software products and tools. Mentor junior Business Research Analysts.
About the team
The RBS-Availability program includes Selection Addition (where new Head-Selections are added based on gaps identified by Selection Monitoring, SM), Buyability (ensuring new HS additions are buyable and recovering established ASINs that became non-buyable), SoROOS (rectifying defects for sourceable out-of-stock ASINs), Glance View Speed (offering ASINs with the best promise speed based on Store/Channel/FC-level nuances), Emerging MPs, and ASIN Productivity (ensuring every ASIN's actual contribution profit meets or exceeds the estimate). The North Star of the Availability program is to "Ensure all customer-relevant (HS) ASINs are available in Amazon Stores with guaranteed delivery promise at an optimal speed." To achieve this, we collaborate with SM, SCOT, Retail Selection, Category, and US-ACES to identify overall opportunities, defect drivers, and ingress across forecasting, sourcing, procurability, and availability systems, fixing them through UDE/Tech-based solutions.

PREFERRED QUALIFICATIONS
- Experience working directly with business stakeholders to translate between data and business needs
- Experience managing, analyzing and communicating results to senior leadership

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 day ago
10.0 - 12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
TCS presents an excellent opportunity for a Data Architect.

Job Description:
Skills: AWS, Glue, Redshift, PySpark
Location: Pune / Kolkata
Experience: 10 to 12 Years

- Strong hands-on experience in Python programming and PySpark.
- Experience using AWS services (Redshift, Glue, EMR, S3 & Lambda).
- Experience working with Apache Spark and the Hadoop ecosystem.
- Experience in writing and optimizing SQL for data manipulation.
- Good exposure to scheduling tools; Airflow is preferable.

Must-Have:
- Data warehouse experience with AWS Redshift or Hive.
- Experience in implementing security measures for data protection.
- Expertise in building/testing complex data pipelines for ETL processes (batch and near real time).
- Readable documentation of all the components being developed.
- Knowledge of database technologies for OLTP and OLAP workloads.
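Because the posting asks for scheduling-tool exposure with a preference for Airflow, here is a hedged DAG skeleton (Airflow 2.x with the Amazon provider assumed; the DAG id, Glue job name, and publish step are placeholders, and operator parameters can vary by provider version):

```python
# Sketch: nightly DAG that runs a Glue transform, then a placeholder Redshift publish step.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

def publish_to_redshift(**_):
    """Placeholder for the Redshift load (e.g., COPY or a materialized-view refresh)."""
    print("load curated data into Redshift here")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",        # 02:00 every day
    catchup=False,
) as dag:
    transform = GlueJobOperator(
        task_id="run_glue_transform",
        job_name="curate-daily-sales",   # assumed, pre-existing Glue job
    )
    publish = PythonOperator(
        task_id="publish_to_redshift",
        python_callable=publish_to_redshift,
    )
    transform >> publish
```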
Posted 1 day ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position : Power BI Architect Location : Hyderabad, Telangana, India Experience : 8–12 years Role Overview You will architect and deliver end‑to‑end enterprise BI solutions. This includes data ingestion, transformation, modelling, and dashboard/report development with Power BI. You will collaborate closely with stakeholders and lead junior team members to ensure high‑quality insights at scale. Key Responsibilities Architecture & Design Design scalable BI architectures, including semantic layers, data models, ETL/ELT pipelines, dashboards, and embedded analytics platforms. Data Integration & ETL Ingest, transform, and cleanse data from multiple sources (SQL, Oracle, Azure Synapse/Data Lake/Fabric, AWS services). Modeling & Query Optimization Build robust data models; write optimized DAX expressions and Power Query M code; apply performance tuning best practices. Solution Delivery Develop reports and dashboards using Power BI Desktop and Service, implement row-level/object-level security (RLS/OLS), capacity planning, and self-service BI frameworks. Cross-Platform Competency Collaborate with teams using MicroStrategy and Tableau; advise on best‑fit tools where relevant. Governance, Documentation & Quality Maintain data dictionaries, metadata, source‑to‑target mappings; support data governance initiatives. Leadership & Stakeholder Management Manage small to mid-sized developer teams, mentor juniors, engage with business users, and support pre-sales or proposal efforts. Required Qualifications & Skills Bachelor’s/Master’s degree in CS, IT, or related field. 8–12 years overall, with 5+ years of hands‑on Power BI architecture and development experience. Deep proficiency with Power BI Desktop & Service, DAX, Power Query (M), SQL/SSIS, and OLAP/tabular modeling. Strong experience in Azure frameworks such as Synapse, Fabric, and cloud-based ETL/data pipelines; AWS exposure is a plus. Experience with Tableau/MicroStrategy or other BI tools. Familiarity with Python or R for data transformations or analytics. Certification like Microsoft Certified: Power BI/Data Analyst Associate preferred. Excellent verbal and written communication skills; stakeholder-facing experience mandatory.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
Working at AxionConnect is more than just a job. It's an opportunity to grow, learn, and thrive in a supportive and inspiring environment. We prioritize the well-being and development of our employees, offering a range of benefits and resources to enhance their professional and personal lives. From training and mentorship programs to flexible work arrangements and employee engagement initiatives, we are committed to ensuring that our team members have the tools and support they need to succeed. Reputation and Industry Leadership As a prominent ITES brand, AxionConnect has established a strong reputation for delivering innovative and cutting-edge solutions to our clients. With a proven track record of success, we have become a trusted partner in the industry, working with leading organizations across various sectors. Culture of Excellence We foster a culture of excellence, encouraging our employees to push boundaries, think creatively, and continuously strive for improvement. We invest in the latest technologies and provide a conducive work environment that empowers our team members to excel in their respective roles. Collaborative and Inclusive Environment At AxionConnect, we believe that collaboration and inclusivity drive innovation. We encourage open communication, teamwork, and knowledge sharing, creating an environment where diverse perspectives are valued and embraced. Our inclusive culture enables our employees to thrive and achieve their full potential. Career Growth Opportunities We are committed to the professional growth and development of our employees. We provide ongoing training, mentorship, and advancement opportunities, allowing individuals to expand their skill sets, take on new challenges, and progress in their careers within AxionConnect. If you are passionate about innovation, collaboration, and making a difference, we invite you to join our community at AxionConnect. We are always seeking talented individuals who are driven to excel and thrive in a dynamic and fast-paced environment. Together, we can create meaningful solutions and shape the future of the ITES industry. Tech Jobs: Principal Consultant BI/DI We're seeking a skilled SAS BI Developer to enhance our tech website's data-driven capabilities. Your role involves automating reporting processes, guiding users in SAS tool utilization, and ensuring data integrity. Join us to optimize data analysis, ETL, and reporting through your expertise in SAS BI, DW/BI concepts, and advanced analytics. The candidate should be well aware of the domain she/he has worked in for a project. Should have in-depth knowledge of DW/BI concepts, Dimensional Modeling, and experience in using SAS Data Integration Studio for data extraction, transformation, and loading. Expertise in creating complex reports, ETL processes, job scheduling, and user roles. Hands-on experience in Base SAS, Advance SAS Programmer, SAS Macros, SAS/Access, SQL, and optimization techniques, along with Unix Operating Systems and scheduling tools knowledge is required. Strong experience in Data Validation, Data Quality, and Error handling is preferred. Desired profile of the candidate: Hands-on experience in Base SAS, Advance SAS Programmer, SAS Macros, SAS/Access, SQL, and optimization techniques, SAS DI/BI Developer.,
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
As a Senior JEDOX Developer at Siemens Energy, your primary responsibility will involve working closely with global business users to address tickets submitted via SharePoint or Mailbox. You will collaborate with IT development and middleware teams to identify and implement solutions aligned with agreed operation and service level agreements. Additionally, you will play a key role in the monthly closing process, ensuring data accuracy and coordinating with end users. Attending sprint development meetings and engaging with collaborators and senior management will be essential to your role, helping you expand your network and prepare for future global responsibilities within Siemens Energy. Your impact will be significant as you lead the design, development, and implementation of data pipelines and ETL workflows. You will be tasked with managing and optimizing workflows for efficient data processing, designing data solutions in databases, and proactively developing reports with minimal documented requirements. Collaborating with cross-functional teams to translate requirements into scalable data architecture and fostering continuous improvement and innovation will be key aspects of your role. To excel in this position, you should have at least 6 years of experience in IT, preferably with a background in Engineering or a related field. Your expertise should include 4+ years of experience in ETL workflows, data analytics, reporting tools like Power BI and Tableau, and working with cloud databases such as SNOWFLAKE. Familiarity with EPM tools like JEDOX, ANAPLAN, or TM1, multidimensional database concepts, Power Automate workflows, and Excel formulas will be advantageous. Your ability to adapt to new technologies and thrive in a fast-paced environment, collaborate effectively with business users, and stay informed about industry trends are essential qualities for this role. Joining the Value Center Manufacturing team at Siemens Energy means being part of a dynamic group focused on driving digital transformation in manufacturing. You will contribute to innovative projects that impact the business and industry, playing a vital role in achieving Siemens Energy's objectives. The Digital Core team supports Business Areas by delivering top-notch IT, Strategy & Technology solutions. Siemens Energy is a global energy technology company with a diverse workforce committed to sustainable and reliable energy solutions. Our emphasis on diversity fuels our creativity and innovation, allowing us to harness the power of inclusion across over 130 nationalities. At Siemens Energy, we prioritize decarbonization, new technologies, and energy transformation to drive positive change in the energy sector. As a Siemens Energy employee, you will enjoy benefits such as Medical Insurance coverage for yourself and eligible family members, including a Family floater cover. Additionally, you will have the option to opt for a Meal Card as part of your CTC, providing tax-saving benefits as per company policy. Siemens Energy is dedicated to creating a supportive and inclusive work environment where individuals from all backgrounds can thrive and contribute to our shared success. Join us in shaping the future of energy and making a meaningful impact on society.,
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
dehradun, uttarakhand
On-site
You should have familiarity with modern storage formats like Parquet and ORC. Your responsibilities will include designing and developing conceptual, logical, and physical data models to support enterprise data initiatives. You will build, maintain, and optimize data models within Databricks Unity Catalog, developing efficient data structures using Delta Lake to optimize performance, scalability, and reusability. Collaboration with data engineers, architects, analysts, and stakeholders is essential to ensure data model alignment with ingestion pipelines and business goals. You will translate business and reporting requirements into a robust data architecture using best practices in data warehousing and Lakehouse design. Additionally, maintaining comprehensive metadata artifacts such as data dictionaries, data lineage, and modeling documentation is crucial. Enforcing and supporting data governance, data quality, and security protocols across data ecosystems will be part of your role. You will continuously evaluate and improve modeling processes. The ideal candidate will have 10+ years of hands-on experience in data modeling in Big Data environments. Expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices is required. Proficiency in modeling methodologies including Kimball, Inmon, and Data Vault is expected. Hands-on experience with modeling tools like ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Proven experience in Databricks with Unity Catalog and Delta Lake is necessary, along with a strong command of SQL and Apache Spark for querying and transformation. Experience with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database is beneficial. Exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are required, with the ability to work in cross-functional agile environments. Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure are desirable. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) are also advantageous.,
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About the Company Creospan is a growing tech collective of makers, shakers, and problem solvers, offering solutions today that will propel businesses into a better tomorrow. “Tomorrow’s ideas, built today!” In addition to being able to work alongside equally brilliant and motivated developers, our consultants appreciate the opportunity to learn and apply new skills and methodologies to different clients and industries. Job Title: Data Modeler Location: Pune (Pan India relocation is considerable - High preference is Pune) Hybrid: 3 days WFO & 2 days WFH Shift timings: UK Working Hours (9AM — 5PM GMT) Notice period: Immediate Gap: Upto 3 Months (Strictly not more than that) Project Overview: Creation and management of business data models in all their forms, including conceptual models, logical data models and physical data models (relational database designs, message models and others). Expert level understanding of relational database concepts, dimensional database concepts and database architecture and design, ontology and taxonomy design. Background working with key data domains as account, holding and transactions within security servicing or asset management space. Expertise in designing data driven solution on Snowflake for complex business needs. Knowledge of entire application lifecycle including Design, Development, Deployment, Operation and Maintenance in an Agile and DevOps culture. Role: This person strengthens the impact of, and provides recommendations on data-models and architecture that will need to be available and shared consistently across the TA organization through the identification, definition and analysis of how data related assets aid business outcomes. The Data Modeler\Architect is responsible for making data trusted, understood and easy to use. They will be responsible for the entire lifecycle of the data architectural assets, from design and development to deployment, operation and maintenance, with a focus on automation and quality. Must Have Skills: 10+ years of experience in Enterprise-level Data Architecture, Data Modelling, and Database Engineering Expertise in OLAP & OLTP design, Data Warehouse solutions, ELT/ETL processes Proficiency in data modelling concepts and practices such as normalization, denormalization, and dimensional modelling (Star Schema, Snowflake Schema, Data Vault, Medallion Data Lake) Experience with Snowflake-specific features, including clustering, partitioning, and schema design best practices Proficiency in Enterprise Modelling tools - Erwin, PowerDesigner, IBM Infosphere etc. 
Strong experience in Microsoft Azure data pipelines (Data Factory, Synapse, SQL DB, Cosmos DB, Databricks) Familiarity with Snowflake’s native tools and services including Snowflake Data Sharing, Snowflake Streams & Tasks, and Snowflake Secure Data Sharing Strong knowledge of SQL performance tuning, query optimization, and indexing strategies Strong verbal and written communication skills for collaborating with both technical teams and business stakeholders Working knowledge of BIAN, ACORD, ESG risk data integration Nice to Haves: At least 3+ in security servicing or asset Management/investment experience is highly desired Understanding of software development life cycle including planning, development, quality assurance, change management and release management Strong problem-solving skills and ability to troubleshoot complex issues Excellent communication and collaboration skills to work effectively in a team environment Self-motivated and ability to work independently with minimal supervision Excellent communication skills: experience in communicating with tech and non-tech teams Deep understanding of data and information architecture, especially in asset management space Familiarity with MDM, data vault, and data warehouse design and implementation techniques Business domain, data/content and process understanding (which are more important than technical skills). Being techno functional is a plus Good presentation skills in creating Data Architecture diagrams Data modelling and information classification expertise at the project and enterprise level Understanding of common information architecture frameworks and information models Experience with distributed data and analytics platforms in cloud and hybrid environments. Also an understanding of a variety of data access and analytic approaches (for example, microservices and event-based architectures) Knowledge of problem analysis, structured analysis and design, and programming techniques Python, R
Posted 2 days ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary... What you'll do...

About team: Walmart’s Enterprise Business Services (EBS) is a powerhouse of several exceptional teams delivering world-class technology solutions and services making a profound impact at every level of Walmart. As a key part of Walmart Global Tech, our teams set the bar for operational excellence and leverage emerging technology to support millions of customers, associates, and stakeholders worldwide. Each time an associate turns on their laptop, a customer makes a purchase, a new supplier is onboarded, the company closes the books, physical and legal risk is avoided, and when we pay our associates consistently and accurately, that is EBS. Joining EBS means embarking on a journey of limitless growth, relentless innovation, and the chance to set new industry standards that shape the future of Walmart.

What you'll do:
- Manage a high-performing team of 10-12 engineers who work across multiple technology stacks including Java and Mainframe.
- Drive design, development, implementation and documentation.
- Establish best engineering and operational excellence practices based on product, engineering and scrum metrics.
- Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
- Engage with Product and Business stakeholders to drive the agenda, set the priorities and deliver scalable and resilient products.
- Work closely with the Architects and cross-functional teams and follow established practices for the delivery of solutions meeting QCD (Quality, Cost & Delivery) within the established architectural guidelines.
- Work with senior leadership to chart out the future roadmap of the products.
- Participate in hiring, mentoring and building high-performing agile teams.
- Participate in organizational events like hackathons, demo days etc. and be the catalyst towards the success of those events.
- Interact closely on requirements with business owners and technical teams, both within India and across the globe.

What you'll bring:
- Bachelor's/Master’s degree in Computer Science, engineering, or a related field, with a minimum of 12+ years of experience in software development and at least 5+ years of experience managing engineering teams.
- Prior experience in managing high-performing agile technology teams.
- Hands-on experience building Java/Scala/Spark-based backend systems is a must, and experience working on cloud-based solutions is desirable.
- Proficiency in Javascript, NodeJS, ReactJS and NextJS is desirable.
- A good understanding of CS fundamentals, microservices, data structures, algorithms and problem solving.
- Exposure to CI/CD development environments/tools including, but not limited to, Git, Maven, Jenkins.
- Strong in writing modular and testable code and test cases (unit, functional and integration) using frameworks like JUnit, Mockito, and MockMvc.
- Experienced in microservices architecture; possesses a good understanding of distributed concepts, common design principles, design patterns and cloud-native development concepts.
- Hands-on experience in Spring Boot, concurrency, garbage collection, RESTful services, data caching services and ORM tools.
- Experience working with relational databases and writing complex OLAP, OLTP and SQL queries.
- Experience working with NoSQL databases like Cosmos DB.
- Experience working with caching technologies like Redis, Memcached or other related systems.
- Good knowledge of pub/sub systems like Kafka.
Experience utilizing monitoring and alert tools like Prometheus, Splunk, and other related systems and excellent in debugging and troubleshooting issues. Exposure to Containerization tools like Docker, Helm, Kubernetes. Knowledge of public cloud platforms like Azure, GCP etc. will be an added advantage. About Walmart Global Tech Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity expert's and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail. Flexible, hybrid work We use a hybrid way of working with primary in office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, be more flexible in our personal lives. Benefits Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits maternity and parental leave, PTO, health benefits, and much more. Belonging We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate. Equal Opportunity Employer Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions – while being inclusive of all people. Minimum Qualifications... Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Minimum Qualifications:Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 5 years’ experience in software engineering or related area. Option 2: 7 years’ experience in software engineering or related area. 2 years’ supervisory experience. Preferred Qualifications... 
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications. Master’s degree in computer science, computer engineering, computer information systems, software engineering, or related area and 3 years' experience in software engineering or related area. Primary Location... Rmz Millenia Business Park, No 143, Campus 1B (1St -6Th Floor), Dr. Mgr Road, (North Veeranam Salai) Perungudi , India R-2244602
Posted 2 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Lead Data Engineer QA
Rank – Manager
Location – Bengaluru/Chennai/Kerala/Kolkata

Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions, both on-premises and in cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will be supporting the build of large-scale data architectures that provide information to downstream systems and business users.

Your Key Responsibilities
- Design and execute manual and automatic test cases, including validating alignment with ELT data integrity and compliance.
- Support conducting QA test case designs, including identifying opportunities for test automation and developing scripts for automatic processes as needed.
- Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
- Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
- Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
- Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
- Establish a methodology to test the effectiveness of BI and DWH projects, ELT reports, integration, and manual and automation functionality.
- Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data environment.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Warehousing, or a related field
10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Understanding of the project and test lifecycle, including exposure to CMMi and process improvement frameworks
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling, and developing and optimizing ETL pipelines
Proven track record of designing and implementing complex data solutions
Understanding of business intelligence concepts, ETL processing, dashboards, and analytics
Testing experience in Data Quality, ETL, OLAP, or Reports
Knowledge of data transformation projects, including database design concepts and white-box testing
Experience with cloud-based data solutions (AWS/Azure)
Demonstrated understanding and experience using:
Cloud-based data solutions (AWS, IICS, Databricks)
GxP and regulatory and risk compliance
AWS cloud infrastructure testing
Python data processing
SQL scripting
Test processes (e.g., ETL/ELT testing, SDLC)
Power BI/Tableau
Scripting (e.g., Perl and shell)
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Test management and defect management tools (e.g., HP ALM)
Cloud platform deployment and tools (e.g., Kubernetes)
DevOps and continuous integration
Databricks/ETL
Understanding of database architecture and administration
Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages and services (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures and pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
10.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Job Description for Lead Data Engineer QA
Rank – Manager
Location – Bengaluru/Chennai/Kerala/Kolkata
Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions on both on-premises and cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will support the build of large-scale data architectures that provide information to downstream systems and business users.
Your Key Responsibilities
Design and execute manual and automated test cases, including validating ETL/ELT data integrity and compliance.
Support QA test case design, including identifying opportunities for test automation and developing scripts for automated processes as needed.
Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
Establish a methodology to test the effectiveness of BI and DWH projects, ETL/ELT reports, integrations, and manual and automated functionality.
Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS-native technologies to support continuing increases in data sources, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Warehousing, or a related field
10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Understanding of the project and test lifecycle, including exposure to CMMi and process improvement frameworks
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling, and developing and optimizing ETL pipelines
Proven track record of designing and implementing complex data solutions
Understanding of business intelligence concepts, ETL processing, dashboards, and analytics
Testing experience in Data Quality, ETL, OLAP, or Reports
Knowledge of data transformation projects, including database design concepts and white-box testing
Experience with cloud-based data solutions (AWS/Azure)
Demonstrated understanding and experience using:
Cloud-based data solutions (AWS, IICS, Databricks)
GxP and regulatory and risk compliance
AWS cloud infrastructure testing
Python data processing
SQL scripting
Test processes (e.g., ETL/ELT testing, SDLC)
Power BI/Tableau
Scripting (e.g., Perl and shell)
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Test management and defect management tools (e.g., HP ALM)
Cloud platform deployment and tools (e.g., Kubernetes)
DevOps and continuous integration
Databricks/ETL
Understanding of database architecture and administration
Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages and services (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures and pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
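Automated test cases for ETL/ELT loads often begin as simple source-to-target reconciliation checks. A minimal pytest sketch, assuming hypothetical connection URLs and table mappings rather than any specific client environment:

```python
# Illustrative only: connection strings, table names, and the run_count() helper
# are placeholders for whatever source and target stores are actually in use.
import pytest
import sqlalchemy as sa

SOURCE_URL = "postgresql://user:pass@source-host/db"     # assumed source system
TARGET_URL = "postgresql://user:pass@warehouse-host/dw"  # assumed target warehouse

def run_count(url: str, table: str) -> int:
    """Return the row count of a table via a plain SQL query."""
    engine = sa.create_engine(url)
    with engine.connect() as conn:
        return conn.execute(sa.text(f"SELECT COUNT(*) FROM {table}")).scalar_one()

@pytest.mark.parametrize("source_table,target_table", [
    ("sales.orders", "dw.fact_orders"),       # hypothetical mappings
    ("sales.customers", "dw.dim_customer"),
])
def test_row_counts_reconcile(source_table, target_table):
    # Basic reconciliation: loaded row counts should match the source extract.
    assert run_count(SOURCE_URL, source_table) == run_count(TARGET_URL, target_table)
```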
Posted 2 days ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Media.net: Media.net is a leading, global ad tech company that focuses on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing Programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats like display, video, mobile, native, as well as search. Media.net’s U.S. HQ is based in New York, and the Global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value-add it offers to its 50+ demand and 21K+ publisher partners, in terms of both products and services.
Responsibilities (What You’ll Do)
Infrastructure Management: Oversee and maintain the infrastructure that supports the ad exchange applications, including load balancers, data stores, CI/CD pipelines, and monitoring stacks. Continuously improve infrastructure resilience, scalability, and efficiency to meet the demands of massive request volume and stringent latency requirements. Develop policies and procedures that improve overall platform stability, and participate in the shared on-call schedule.
Collaboration with Developers: Work closely with developers to establish and uphold quality and performance benchmarks, ensuring that applications meet the necessary criteria before they are deployed to production. Participate in design reviews and provide feedback on infrastructure-related aspects to improve system performance and reliability.
Building Tools for Infra Management: Develop tools to simplify and enhance infrastructure management, automate processes, and improve operational efficiency. These tools may address areas such as monitoring, alerting, deployment automation, and failure detection and recovery, which are critical in minimizing latency and maintaining uptime.
Performance Optimization: Focus on reducing latency and maximizing efficiency across all components, from request handling in load balancers to database optimization. Implement best practices and tools for performance monitoring, including real-time analysis and response mechanisms.
Who Should Apply
B.Tech/M.Tech or equivalent in Computer Science, Information Technology, or a related field.
2–4 years of experience managing services in large-scale distributed systems.
Strong understanding of networking concepts (e.g., TCP/IP, routing, SDN) and modern software architectures.
Proficiency in programming and scripting languages such as Python, Go, or Ruby, with a focus on automation.
Experience with container orchestration tools like Kubernetes and virtualization platforms (preferably GCP).
Ability to independently own problem statements, manage priorities, and drive solutions.
Preferred Skills & Tools Expertise:
Infrastructure as Code: Experience with Terraform. Configuration management tools like Nix or Ansible.
Monitoring and Logging Tools: Expertise with Prometheus, Grafana, or the ELK stack.
OLAP databases: ClickHouse and Apache Druid.
CI/CD Pipelines: Hands-on experience with Jenkins or ArgoCD.
Databases: Proficiency in MySQL (relational) or Redis (NoSQL).
Load Balancers/Servers: Familiarity with HAProxy or Nginx.
Strong knowledge of operating systems and networking fundamentals.
Experience with version control systems such as Git.
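Tooling of the kind described under Building Tools for Infra Management often starts as a small script before growing into a full monitoring system. A minimal Python health-check sketch; the service names, URLs, and latency budget below are invented for illustration:

```python
# Illustrative only: service names, URLs, and the latency budget are made up.
import time
import requests

SERVICES = {
    "ad-exchange": "http://ad-exchange.internal/healthz",
    "bidder": "http://bidder.internal/healthz",
}
LATENCY_BUDGET_MS = 50  # assumed per-service latency budget

def check(name: str, url: str) -> None:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=1)
        elapsed_ms = (time.monotonic() - start) * 1000
        if resp.status_code != 200 or elapsed_ms > LATENCY_BUDGET_MS:
            print(f"ALERT {name}: status={resp.status_code} latency={elapsed_ms:.1f}ms")
        else:
            print(f"OK {name}: {elapsed_ms:.1f}ms")
    except requests.RequestException as exc:
        print(f"ALERT {name}: unreachable ({exc})")

if __name__ == "__main__":
    for name, url in SERVICES.items():
        check(name, url)
```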
Posted 2 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description & Summary: We are looking for a skilled Azure Cloud Data Engineer with strong expertise in Python programming, Databricks, and advanced SQL to join our team in Noida. The candidate will be responsible for designing, developing, and optimizing scalable data solutions on the Azure cloud platform. You will play a critical role in building data pipelines and transforming complex data into actionable insights by leveraging cloud-native tools and technologies.
Level: Senior Consultant / Manager
Location: Noida
LOS:
Competency: Data & Analytics
Skill: Azure Data Engineering
Job Position Title: Azure Cloud Data Engineer with Python Programming – Senior Consultant/Manager (6+ Years)
Responsibilities:
· Design, develop, and manage scalable and secure data pipelines using Azure Databricks and Azure Data Factory.
· Write clean, efficient, and reusable code primarily in Python for cloud automation, data processing, and orchestration.
· Architect and implement cloud-based data solutions, integrating structured and unstructured data sources.
· Build and optimize ETL workflows and ensure seamless data integration across platforms.
· Develop data models using normalization and denormalization techniques to support OLTP and OLAP systems.
· Manage Azure-based storage solutions including Azure Data Lake and Blob Storage.
· Troubleshoot performance bottlenecks in data flows and ETL processes.
· Integrate advanced analytics and support BI use cases within the Azure ecosystem.
· Lead code reviews and ensure adherence to version control practices (e.g., Git).
· Contribute to the design and deployment of enterprise-level data warehousing solutions.
· Stay current with Azure cloud technologies and Python ecosystem updates to adopt best practices and emerging tools.
Mandatory skill sets:
· Strong Python programming skills (Must-Have) – advanced scripting, automation, and cloud SDK experience
· Strong SQL skills (Must-Have)
· Azure Databricks (Must-Have)
· Azure Data Factory
· Azure Blob Storage / Azure Data Lake Storage
· Apache Spark (hands-on experience)
· Data modeling (normalization and denormalization)
· Data warehousing and BI tools integration
· Git (version control)
· Building scalable ETL pipelines
Preferred skill sets (Good to Have):
· Understanding of OLTP and OLAP environments
· Experience with Kafka and Hadoop
· Azure Synapse Analytics
· Azure DevOps for CI/CD integration
· Agile delivery methodologies
Years of experience required:
· 6+ years of overall experience in cloud engineering or data engineering roles, with at least 2–3 years of hands-on experience with Azure cloud services.
· Proven track record of strong Python development with at least 2–3 years of hands-on experience.
Education qualification: BE/B.Tech/MBA/MCA
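Pipelines of the kind described above commonly read raw files from Azure Data Lake Storage, transform them in Spark, and write curated output back for downstream BI. A minimal PySpark sketch; the storage account, paths, and column names are hypothetical:

```python
# Illustrative only: the storage account, container paths, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-etl").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024/"      # assumed input
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/sales/"   # assumed output

raw = spark.read.json(raw_path)

curated = (
    raw.dropDuplicates(["order_id"])                     # de-duplicate on a hypothetical key
       .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
       .filter(F.col("amount").isNotNull())              # drop incomplete records
)

# Write partitioned Parquet; Delta format could be substituted on Databricks.
curated.write.mode("overwrite").partitionBy("order_date").parquet(curated_path)
```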
Posted 3 days ago
5.0 years
0 Lacs
India
On-site
Job Title: IBM TM1 Professional
Location: PAN India
Job Type: Hybrid
About the Role: We are seeking an experienced IBM TM1 professional to join our client's team and contribute to the design, development, and support of their financial planning, budgeting, and forecasting solutions using IBM TM1 (also known as IBM Planning Analytics). The ideal candidate will have hands-on experience in IBM TM1 development, data modeling, and performance tuning to deliver high-quality financial planning solutions to our clients.
Key Responsibilities:
Design, develop, and maintain IBM TM1 cubes, models, and processes for budgeting, forecasting, and reporting.
Work with business stakeholders to gather requirements and translate them into effective TM1 solutions.
Create and maintain complex TM1 rules, TI (TurboIntegrator) processes, and active forms.
Integrate TM1 with external systems (e.g., ERP, CRM, data warehouses) for data exchange.
Optimize the performance of TM1 models and ensure high availability and scalability.
Provide troubleshooting and issue resolution for TM1-related problems.
Create and maintain detailed documentation for TM1 models, processes, and reports.
Train and mentor junior team members and provide technical guidance to users.
Conduct performance testing and capacity planning to ensure optimal TM1 system performance.
Collaborate with business and IT teams to drive continuous improvement in planning and reporting processes.
Required Skills and Experience:
5+ years of strong experience with IBM TM1 (Planning Analytics), including cube design, rule writing, and TurboIntegrator scripting.
Proficiency in TM1 development, including creating complex reports and integrating with other tools.
Understanding of financial planning processes and experience working with business stakeholders in a financial environment.
Knowledge of OLAP concepts, multidimensional data models, and performance tuning for TM1 environments.
Experience in managing TM1 environments, including backups, upgrades, and troubleshooting.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Ability to work independently and as part of a team.
Desired Skills:
Experience with IBM Planning Analytics Workspace (PAW) and/or Planning Analytics for Excel (PAX).
Knowledge of other BI tools (e.g., Tableau, Power BI, Cognos).
Familiarity with SQL and relational database concepts.
Experience with cloud-based TM1 deployments (e.g., IBM Cloud).
Certification in IBM TM1/Planning Analytics is a plus.
Posted 3 days ago
2.0 - 6.0 years
14 - 15 Lacs
Hyderabad
Work from Office
Career Category: Engineering
Job Description
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Associate Data Engineer
What you will do
Let's do this. Let's change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.
Roles & Responsibilities:
Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms.
Managing and maintaining the AWS and Databricks environments.
Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Maintaining system uptime and optimal performance.
Working closely with cross-functional teams to understand business requirements and translate them into technical solutions.
Exploring and implementing new tools and technologies to enhance ETL platform performance.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Bachelor's degree and 2 to 6 years of experience.
Functional Skills:
Must-Have Skills:
Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores. Proven ability to optimize query performance on big data platforms.
Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes.
Ability to learn new technologies quickly.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Good-to-Have Skills:
Experience with SQL/NoSQL databases and vector databases for large language models.
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with Apache Spark and Apache Airflow.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Experience with AWS, GCP, or Azure cloud services.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
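For the Python/PySpark/Airflow skills listed above, orchestration usually takes the shape of a small DAG with ordered extract, transform, and load tasks. A minimal sketch, assuming a recent Airflow 2.x release; the DAG id and task bodies are placeholders:

```python
# Illustrative only: the DAG id, schedule, and task callables are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")       # placeholder step

def transform():
    print("clean and reshape the extracted data")   # placeholder step

def load():
    print("write the result to the warehouse")      # placeholder step

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Enforce the extract -> transform -> load ordering.
    t_extract >> t_transform >> t_load
```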
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 3 days ago
8.0 - 13.0 years
13 - 17 Lacs
Bengaluru
Work from Office
About Netskope
Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
As a Staff Engineer on the Data Engineering Team, you'll be working on some of the hardest problems in the field of data, cloud, and security, with a mission to achieve the highest standards of customer success. You will be building blocks of technology that will define Netskope's future. You will leverage open source technologies around OLAP, OLTP, streaming, big data, and ML models. You will help design and build an end-to-end system to manage the data and infrastructure used to improve security insights for our global customer base. You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products. You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What you will be doing
Conceiving and building services used by Netskope products to validate, transform, load, and perform analytics on large amounts of data using distributed systems with cloud scale and reliability.
Helping other teams architect their applications using services from the Data team while following best practices and sound designs.
Evaluating many open source technologies to find the best fit for our needs, and contributing to some of them.
Working with the Application Development and Product Management teams to scale their underlying services.
Providing easy-to-use analytics of usage patterns, anticipating capacity issues, and helping with long-term planning.
Learning about and designing large-scale, reliable enterprise services.
Working with great people in a fun, collaborative environment.
Creating scalable data mining and data analytics frameworks using cutting-edge tools and techniques.
Required skills and experience
8+ years of industry experience building highly scalable distributed data systems
Programming experience in Python, Java, or Golang
Excellent data structure and algorithm skills
Proven good development practices, such as automated testing and measuring code coverage
Proven experience developing complex data platforms and solutions using technologies like Kafka, Kubernetes, MySQL, Hadoop, BigQuery, and other open source databases
Experience designing and implementing large, fault-tolerant, distributed systems around columnar data stores
Excellent written and verbal communication skills
Bonus points for contributions to the open source community
Education
BSCS or equivalent required, MSCS or equivalent strongly preferred
#LI-SK3
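Streaming ingestion of the kind referenced above (Kafka feeding analytics stores) often begins with a consumer that validates and routes events before loading them. A minimal sketch using the kafka-python client; the topic, brokers, and message shape are assumptions, not Netskope's actual pipeline:

```python
# Illustrative only: the topic, brokers, consumer group, and message fields are assumptions.
import json
from kafka import KafkaConsumer  # kafka-python package

consumer = KafkaConsumer(
    "security-events",                         # hypothetical topic
    bootstrap_servers=["broker1:9092"],
    group_id="insights-loader",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Validate and route each event before loading it into the analytics store.
    if "tenant_id" in event and "timestamp" in event:
        print(f"load event for tenant {event['tenant_id']}")
    else:
        print("skip malformed event")
```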
Posted 3 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Azure Data Engineer
Immediate starters required – Hyderabad, TG & Noida, UP
Timing: 5 PM to 2 AM
Should Have
Expert with Azure cloud development infrastructure.
Expert with designing Azure Data Lake Storage (Gen2), Azure SQL Server, Azure Data Factory, Azure Logic Apps, Azure Analysis Services, Automation accounts, PowerShell, JSON, and integrations with Azure resources.
Expert in understanding data warehouse and data mart models to implement reporting-layer designs for enterprise self-service reporting.
In-depth understanding of database management systems, online analytical processing (OLAP), and the ETL (extract, transform, load) framework.
Expert with SQL queries, SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).
Expert with Azure Data Factory working with various data sources and targets.
Expert SQL, T-SQL, and PL/SQL knowledge against a variety of databases such as SQL Server, Oracle, Hyperion, and Caché.
Experience with ETL tools like Kettle or DataStage is a plus.
Experience with Hyperion and Kronos is a plus.
Expertise in Oracle and EBS, with a concentration on HR.
Should feel comfortable with relational design for high-transaction databases.
Experience with Jira, Confluence, EasyVista, or a similar application preferred.
Ability to adapt to new tools.
Must be able to work independently as well as with a team.
Motivated and self-directed, but with the ability to take direction from others.
Excellent verbal and written communication skills.
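Day-to-day validation work against Azure SQL from Python typically goes through an ODBC connection. A minimal pyodbc sketch; the server, database, credentials, and table are placeholders, not a real environment:

```python
# Illustrative only: the server, database, credentials, and table name are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=reporting;"
    "UID=etl_user;PWD=<secret>;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Simple reporting-layer sanity check: yesterday's load should not be empty.
    cursor.execute(
        "SELECT COUNT(*) FROM dbo.fact_sales WHERE load_date = CAST(GETDATE() - 1 AS date)"
    )
    row_count = cursor.fetchone()[0]
    print(f"rows loaded yesterday: {row_count}")
```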
Posted 3 days ago
10.0 - 14.0 years
15 - 22 Lacs
Gurugram
Work from Office
ZS Master Data Management Team has an extensive track record of completing over 1,000 global projects and partnering with 15 of the top 20 global pharma organizations. They specialize in various MDM domains, offering end-to-end project implementation, change management, and data stewardship support. Their services encompass MDM strategy consulting, implementation for key entities (e.g., HCP, HCO, Employee, Payer, Product, Patient, Affiliations), and operational support including KTLO and Data Stewardship. With 50+ MDM implementations and Change Management programs annually for Life Sciences clients, the team has developed valuable assets like MDM libraries and pre-built accelerators. Strategic partnerships with leading platform vendors (Reltio, Informatica, Veeva, Semarchy, etc.) and collaborations with 18+ data vendors and technology providers further enhance their capabilities.
As a Business Technology Solutions Manager, you will take ownership of one or more client deliveries at a cross-office level, encompassing the area of digital experience transformation. The successful candidate will work closely with ZS Technology leadership and be responsible for building and managing client relationships, generating new business engagements, and providing thought leadership in the digital area.
What You'll Do
Lead the delivery process, from discovery/POC through managing operations, across 3-4 client engagements, helping to deliver world-class MDM solutions.
Take ownership to ensure the proposed design/architecture and deliverables meet client expectations and solve the business problem with a high degree of quality.
Partner with the senior leadership team and assist in project management responsibilities, i.e., project planning, staffing management, people growth, etc.
Develop and implement master data management strategies and processes to maintain high-quality master data across the organization.
Design and manage data governance frameworks, including data quality standards, policies, and procedures.
Maintain an outlook for continuous improvement and innovation, and provide the necessary mentorship and guidance to the team.
Liaise with the staffing partner and HR business partners for team building/planning.
Lead efforts to build points of view on new technologies and problem solving, and drive innovation to build firm intellectual capital.
Actively lead unstructured problem solving to design and build complex solutions, tuned to meet expected performance and functional requirements.
Stay current with industry trends and emerging technologies in master data management and data governance.
What You'll Bring
Bachelor's/Master's degree with specialization in Computer Science, MIS, IT, or other computer-related disciplines.
10-14 years of relevant consulting-industry experience (preferably Healthcare and Life Sciences) working on medium- to large-scale MDM solution delivery engagements.
5+ years of hands-on experience designing and implementing MDM services and capabilities using tools such as Informatica MDM, Reltio, etc.
Strong understanding of data management principles, including data modeling, data quality, and metadata management.
Strong understanding of various cloud-based data management (ETL) platforms such as AWS, Azure, Snowflake, etc.
Experience designing and driving delivery of mid- to large-scale solutions on cloud platforms.
Experience with ETL design and development, and OLAP tools to support business applications.
Additional Skills
Ability to manage a virtual, global team environment that contributes to the overall timely delivery of multiple projects.
Knowledge of current data modeling and data warehouse concepts, issues, practices, methodologies, and trends in the Business Intelligence domain.
Experience with analyzing and troubleshooting the interaction between databases, operating systems, and applications.
Significant supervisory, coaching, and hands-on project management skills.
Willingness to travel to other global offices as needed to work with clients or other internal project teams.
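MDM matching and survivorship, which this role centers on, reduce at their simplest to a match rule plus a rule for choosing the surviving values. A minimal Python sketch with invented field names and sample records, not the logic of any vendor platform (Informatica MDM, Reltio, etc.):

```python
# Illustrative only: the matching rule, field names, and sample records are invented
# to show the shape of a deterministic HCP match/merge step, not any vendor's logic.
from difflib import SequenceMatcher

records = [
    {"source": "CRM",    "npi": "1234567890", "name": "Dr. Jane Smith", "updated": "2024-05-01"},
    {"source": "Claims", "npi": "1234567890", "name": "Jane Smith MD",  "updated": "2024-06-15"},
]

def is_match(a: dict, b: dict, threshold: float = 0.8) -> bool:
    # Deterministic rule: same identifier, plus a fuzzy name comparison as a safeguard.
    same_id = a["npi"] == b["npi"]
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return same_id and name_sim >= threshold

def survivor(a: dict, b: dict) -> dict:
    # Survivorship rule: the most recently updated record wins.
    return a if a["updated"] >= b["updated"] else b

if is_match(records[0], records[1]):
    golden = survivor(records[0], records[1])
    print("golden record:", golden)
```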
Posted 3 days ago
With the increasing demand for data analysis and business intelligence, OLAP (Online Analytical Processing) jobs have become popular in India. OLAP professionals are responsible for designing, building, and maintaining OLAP databases to support data analysis and reporting activities for organizations. If you are looking to pursue a career in OLAP in India, here is a comprehensive guide to help you navigate the job market.
These cities are known for having a high concentration of IT companies and organizations that require OLAP professionals.
The average salary range for OLAP professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 12 lakhs per annum.
Career progression in OLAP typically follows a trajectory from Junior Developer to Senior Developer, and then to a Tech Lead role. As professionals gain experience and expertise in OLAP technologies, they may also explore roles such as Data Analyst, Business Intelligence Developer, or Database Administrator.
In addition to OLAP expertise, professionals in this field are often expected to have knowledge of SQL, data modeling, ETL (Extract, Transform, Load) processes, data warehousing concepts, and data visualization tools such as Tableau or Power BI.
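To make that skill set concrete, the core OLAP operation is aggregating a fact measure across multiple dimensions with roll-up totals. A small pandas sketch with invented sample data, shown only to illustrate the idea:

```python
# Illustrative only: the sample data is invented to show an OLAP-style roll-up in pandas.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100, 120, 80, 95],
})

# Slice-and-dice across two dimensions with totals, similar to a small OLAP cube view.
cube = pd.pivot_table(
    sales,
    values="revenue",
    index="region",
    columns="quarter",
    aggfunc="sum",
    margins=True,         # adds roll-up totals per row and column
    margins_name="Total",
)
print(cube)
```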
As you prepare for OLAP job interviews in India, make sure to hone your technical skills, brush up on industry trends, and showcase your problem-solving abilities. With the right preparation and confidence, you can successfully land a rewarding career in OLAP in India. Good luck!