
954 OLAP Jobs - Page 34

JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We are seeking an experienced SQL Developer with expertise in SQL Server Analysis Services (SSAS) and AWS to join our growing team. The successful candidate will be responsible for designing, developing, and maintaining SQL Server-based OLAP cubes and SSAS models for business intelligence purposes. You will work with multiple data sources, ensuring data integration, optimization, and performance of the reporting models. This role offers an exciting opportunity to work in a hybrid environment and collaborate with cross-functional teams.

Posted 2 months ago

Apply

5.0 - 7.0 years

7 - 11 Lacs

Pune

Work from Office

We are seeking a highly skilled and experienced MSTR (MicroStrategy) Developer to join our Business Intelligence team. In this role, you will be responsible for the design, development, implementation, and maintenance of robust and scalable BI solutions using the MicroStrategy platform. Your primary focus will be on leveraging your deep understanding of MicroStrategy architecture and strong SQL skills to deliver insightful and actionable data to our stakeholders. This is an excellent opportunity to contribute to critical business decisions by providing high-quality BI solutions.

Responsibilities:
- Design, develop, and deploy MicroStrategy objects including reports, dashboards, cubes (Intelligent Cubes, OLAP Cubes), documents, and visualizations.
- Utilize various MicroStrategy features and functionalities such as Freeform SQL, Query Builder, MDX connectivity, and data blending.
- Optimize MicroStrategy schema objects (attributes, facts, hierarchies) for performance and usability.
- Implement security models within MicroStrategy, including user and group management, object-level security, and data-level security.
- Perform performance tuning and optimization of MicroStrategy reports and dashboards.
- Participate in the administration and maintenance of the MicroStrategy environment, including metadata management, project configuration, and user support.
- Troubleshoot and resolve issues related to MicroStrategy reports, dashboards, and the overall platform.
- Write complex and efficient SQL queries to extract, transform, and load data from various data sources.
- Understand database schema design and data modeling principles.
- Optimize SQL queries for performance within the MicroStrategy environment.
- Work with different database platforms (e.g., Oracle, SQL Server, Teradata, Snowflake) and understand their specific SQL dialects.
- Develop and maintain database views and stored procedures to support MicroStrategy development.
- Collaborate with business analysts and end-users to understand their reporting and analytical requirements.
- Translate business requirements into technical specifications for MicroStrategy development.
- Participate in the design and prototyping of BI solutions.
- Develop and execute unit tests and integration tests for MicroStrategy objects.
- Participate in user acceptance testing (UAT) and provide support to end-users during the testing phase.
- Ensure the accuracy and reliability of data presented in MicroStrategy reports and dashboards.
- Create and maintain technical documentation for MicroStrategy solutions, including design documents, user guides, and deployment instructions.
- Provide training and support to end-users on how to effectively use MicroStrategy reports and dashboards.
- Adhere to MicroStrategy best practices and development standards.
- Stay updated with the latest MicroStrategy features and functionalities.
- Proactively identify opportunities to improve existing MicroStrategy solutions and processes.

Required Skills and Expertise:
- Strong proficiency in MicroStrategy development (5+ years of hands-on experience is essential), including a deep understanding of the MicroStrategy architecture, object creation, report development, dashboard design, and administration.
- Excellent SQL skills (5+ years of experience writing complex queries, optimizing performance, and working with various database systems).
- Experience in data modeling and understanding of dimensional modeling concepts (e.g., star schema, snowflake schema).
- Solid understanding of BI concepts, data warehousing principles, and ETL processes.
- Experience in performance tuning and optimization of MicroStrategy reports and SQL queries.
- Ability to gather and analyze business requirements and translate them into technical specifications.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills, with the ability to work effectively with both technical and business stakeholders.
- Experience with version control systems (e.g., Git).
- Ability to work independently and as part of a team.
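The posting above leans heavily on complex SQL against dimensional models. As a minimal, runnable sketch of the shape such report SQL usually takes, the snippet below builds a tiny star schema in an in-memory SQLite database and runs the kind of fact-by-dimension aggregate a MicroStrategy report would generate; all table and column names are hypothetical.

```python
# Minimal sketch: the star-schema aggregate SQL a BI report typically
# generates. Table and column names are hypothetical; sqlite3 is used only
# so the example runs anywhere without a server.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, region_name TEXT);
CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY,
                         region_id INTEGER REFERENCES dim_region(region_id),
                         amount REAL);
INSERT INTO dim_region VALUES (1, 'North'), (2, 'South');
INSERT INTO fact_sales VALUES (1, 1, 100.0), (2, 1, 250.0), (3, 2, 75.0);
""")

# Aggregate fact rows by a dimension attribute -- the shape of most BI SQL.
for row in con.execute("""
    SELECT r.region_name, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_region r ON r.region_id = f.region_id
    GROUP BY r.region_name
    ORDER BY total_sales DESC
"""):
    print(row)
```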

Posted 2 months ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Bengaluru

Work from Office

As a SQL Developer focused on SSAS and OLAP, you will play a critical role in our data warehousing and business intelligence initiatives. You will work closely with data engineers, business analysts, and other stakeholders to ensure the delivery of accurate and timely data insights. Your expertise in SSAS development, performance optimization, and data integration will be essential to your success.

Responsibilities:
- Design, develop, and maintain SQL Server Analysis Services (SSAS) models (multidimensional and tabular).
- Create and manage OLAP cubes to support business intelligence reporting and analytics.
- Implement best practices for data modeling and cube design.
- Optimize the performance of SSAS solutions for efficient query processing and data retrieval.
- Tune SSAS models and cubes to ensure optimal performance; identify and resolve performance bottlenecks.
- Integrate data from various sources (relational databases, flat files, APIs) into SQL Server databases and SSAS models.
- Develop and implement ETL (Extract, Transform, Load) processes for data integration.
- Ensure data quality and consistency across integrated data sources.
- Support the development of business intelligence reports and dashboards.
- Collaborate with business analysts to understand reporting requirements and translate them into SSAS solutions.
- Provide technical support and troubleshooting for SSAS-related issues.
- Preferably, apply knowledge of AWS S3 and SQL Server PolyBase for data integration and cloud-based data warehousing, integrating data from AWS S3 into SSAS models using PolyBase or other appropriate methods.

Required Skills & Qualifications:

Experience:
- 5-8 years of experience as a SQL Developer with a focus on SSAS and OLAP.
- Proven experience in designing and developing multidimensional and tabular SSAS models.

Technical Skills:
- Strong expertise in SQL Server Analysis Services (SSAS) and OLAP cube development.
- Proficiency in writing MDX and DAX queries.
- Experience with data modeling and database design.
- Strong understanding of ETL processes and data integration techniques.
- Experience with SQL Server databases and related tools.
- Knowledge of AWS S3 and SQL Server PolyBase preferred.
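For readers unfamiliar with the two query languages the posting names, here is an illustrative pair: MDX against a multidimensional cube and DAX against a tabular model. The cube, dimension, and table names ([Sales], 'Internet Sales', etc.) are hypothetical stand-ins, and the strings are simply printed rather than executed, since running them requires a live SSAS instance.

```python
# Illustrative only: representative MDX and DAX for a hypothetical SSAS
# sales model. Real execution would go through a client such as ADOMD.NET;
# here we just hold the query text so the two shapes are easy to compare.

mdx_query = """
SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    [Date].[Calendar Year].MEMBERS ON ROWS
FROM [Sales]
WHERE ([Product].[Category].[Bikes])
"""

dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Calendar Year],
    "Total Sales", SUM('Internet Sales'[Sales Amount])
)
"""

print("MDX (multidimensional model):", mdx_query)
print("DAX (tabular model):", dax_query)
```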

Posted 2 months ago

Apply

20.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description

Over the past 20 years, Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection, and value on Amazon.com. By deploying Amazon Pay's products and services, merchants make it easy for these millions of customers to safely purchase from their third-party sites using the information already stored in their Amazon account.

In this role, you will lead Data Engineering efforts to drive automation for the Amazon Pay organization. You will be part of the data engineering team that will envision, build, and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will work with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions.

Key job responsibilities
- Design, implement, and support a platform providing ad-hoc access to large data sets
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, or Datastage

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2992057
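As a hedged sketch of the ETL work described above, the following PySpark job reads raw events, filters and aggregates them, and writes a columnar, partitioned output. The bucket paths, column names, and business rules are all hypothetical; it assumes only that PySpark is available.

```python
# A minimal PySpark ETL sketch: read raw data, transform, write an
# analytics-friendly format. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payments-etl").getOrCreate()

# Extract: raw payment events (schema inferred for brevity).
raw = spark.read.csv("s3://example-bucket/raw/payments/",
                     header=True, inferSchema=True)

# Transform: keep completed payments, aggregate daily totals per merchant.
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .groupBy("merchant_id", F.to_date("event_ts").alias("event_date"))
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Load: columnar output partitioned by date for downstream OLAP queries.
daily.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://example-bucket/curated/daily_payments/")
```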

Posted 2 months ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Your contributions to the organisation's growth:
- Maintain and develop data platforms based on Microsoft Fabric for Business Intelligence and Databricks for real-time data analytics.
- Design, implement, and maintain standardized, production-grade data pipelines using modern data transformation processes and workflows for SAP, MS Dynamics, on-premise or cloud.
- Develop an enterprise-scale cloud-based Data Lake for business intelligence solutions.
- Translate business and customer needs into data collection, preparation, and processing requirements.
- Optimize the performance of algorithms developed by Data Scientists.
- General administration and monitoring of the data platforms.

Competencies:
- Working with structured and unstructured data.
- Experienced in various database technologies (RDBMS, OLAP, time series, etc.).
- Solid programming skills (Python, SQL; Scala is a plus).
- Experience in Microsoft Fabric (incl. Warehouse, Lakehouse, Data Factory, DataFlow Gen2, Semantic Model) and/or Databricks (Spark).
- Proficient in Power BI.
- Experienced working with APIs.
- Proficient in security best practices.
- Data-centered Azure know-how is a plus (Storage, Networking, Security, Billing).

Expertise you bring along:
- Bachelor's or Master's degree in business informatics, computer science, or equivalent.
- A background in software engineering (e.g., agile programming, project organization) and experience with human-centered design would be desirable.
- Extensive experience in handling large data sets.
- At least 5 years of experience as a data engineer, preferably in an industrial company.
- Analytical problem-solving skills and the ability to assimilate complex information.
- Programming experience in modern data-oriented languages (SQL, Python).
- Experience with Apache Spark and DevOps.
- Proven ability to synthesize complex data; advanced technical skills in data modelling, data mining, database design, and performance tuning.
- English language proficiency.

Special requirements:
- High-quality mindset paired with strong customer orientation, critical thinking, and attention to detail.
- Understanding of data processing at scale.
- Influence without authority.
- Willingness to acquire additional system/technical knowledge as needed.
- Problem solver.
- Experience working in an international organization and in multi-cultural teams.
- Proactive, creative, and innovative.
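A minimal sketch of the real-time side of this role, assuming a local PySpark installation: a Structured Streaming job with a windowed aggregation. It uses Spark's built-in "rate" source so it runs without Kafka or Event Hubs, which a production pipeline would read instead.

```python
# Minimal Structured Streaming sketch of real-time analytics. The built-in
# "rate" source generates timestamped rows so the demo is self-contained.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Count events per 10-second window, tolerating 30 seconds of lateness.
counts = (
    events.withWatermark("timestamp", "30 seconds")
          .groupBy(F.window("timestamp", "10 seconds"))
          .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination(30)   # run briefly for the demo
query.stop()
```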

Posted 2 months ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At NICE, we don't limit our challenges. We challenge our limits. Always. We're ambitious. We're game changers. And we play to win. We set the highest standards and execute beyond them. And if you're like us, we can offer you the ultimate career opportunity that will light a fire within you.

What's the role all about?
As a BI Developer, you'll be a key contributor to developing Reports in a multi-region, multi-tenant SaaS product. You'll collaborate with the core R&D team to build high-performance Reports to serve the use cases of several applications in the suite.

How will you make an impact?
- Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams.
- Ensure that architectural concepts are consistently implemented across the product.
- Act as a product expert within R&D, understanding the product's requirements and its market positioning.
- Work closely with cross-functional teams (Product Managers, Sales, Customer Support, and Services) to ensure successful product delivery.
- Design and build Reports for given requirements.
- Create design documents and test cases for the reports.
- Develop SQL to address ad-hoc report requirements and conduct analyses.
- Create visualizations and reports as per the requirements.
- Execute unit testing, functional and performance testing, and document the results.
- Conduct peer reviews and ensure quality is met at all stages.

Have you got what it takes?
- Bachelor/Master of Engineering degree in Computer Science, Electronic Engineering, or equivalent from a reputed institute.
- 2-4 years of BI report development experience.
- Expertise in SQL and any cloud-based database; able to write SQL against any DB for any business need.
- Experience in BI tools like Tableau, Power BI, MicroStrategy, etc.
- Experience working in an enterprise Data Warehouse/Data Lake system.
- Strong knowledge of analytical databases and schemas.
- Development experience building solutions that leverage SQL and NoSQL databases; experience/knowledge of Snowflake an advantage.
- In-depth understanding of database management systems, online analytical processing (OLAP), and the ETL (Extract, Transform, Load) framework.
- Experience in functional testing, performance testing, etc.
- Experience with public cloud infrastructure and technologies such as AWS/Azure/GCP.
- Experience with Continuous Integration and Delivery practices using industry-standard tools such as Jenkins.
- Experience working in an Agile development environment and using work-item management tools like JIRA.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager
Role Type: Individual Contributor

About NICE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime, and ensure public safety. Every day, NICE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud, and digital, NICE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NICE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
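As a small, runnable illustration of the "develop SQL to address ad-hoc report requirements" bullet above, the snippet below answers a hypothetical reporting question (rank agents by handled interactions) with an aggregate plus a window function; SQLite keeps it self-contained, and the schema and data are invented.

```python
# Sketch of an ad-hoc report query: rank agents by handled interactions
# using a window function over grouped totals. All names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE interactions (agent TEXT, channel TEXT, handled INTEGER);
INSERT INTO interactions VALUES
  ('asha',  'voice', 42), ('asha',  'chat', 18),
  ('ravi',  'voice', 35), ('meena', 'chat', 51);
""")

report = """
SELECT agent,
       SUM(handled) AS total_handled,
       RANK() OVER (ORDER BY SUM(handled) DESC) AS agent_rank
FROM interactions
GROUP BY agent
"""
for row in con.execute(report):
    print(row)
```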

Posted 2 months ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Job Title: Data Modeler
Experience: 5+ Years
Location: Hyderabad (WFO)

Roles and Responsibilities:
- Experience in data modelling: designing, implementing, and maintaining data models to support data quality, performance, and scalability.
- Proven experience as a Data Modeler, working with data analysts, data architects, and business stakeholders to ensure data models are aligned with business requirements.
- Expertise in Azure, Databricks, data warehousing, and ERWIN; a supply chain background is required.
- Strong knowledge of data modelling principles and techniques (e.g., ERD, UML).
- Proficiency with data modelling tools (e.g., ER/Studio, Erwin, IBM Data Architect).
- Experience with relational databases (e.g., SQL Server, Oracle, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Solid understanding of data warehousing, ETL processes, and data integration.
- Able to create and maintain Source-to-Target Mapping (STTM) documents, Bus Matrix documents, etc.
- Hands-on experience in OLTP and OLAP database modelling.

Additional:
- Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus.
- Excellent analytical, problem-solving, and communication skills.
- Ability to work effectively in a collaborative, fast-paced environment.
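To make the star-schema vocabulary above concrete, here is a hypothetical slice of the DDL a data modeler might hand off: one fact table at order-line grain with surrogate-keyed dimensions. It runs on SQLite purely so the sketch executes end to end; a real model would target the warehouse platform.

```python
# Hypothetical star-schema DDL: a fact table plus conformed dimensions.
import sqlite3

ddl = """
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20250131
    full_date    TEXT NOT NULL,
    fiscal_year  INTEGER NOT NULL
);
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT NOT NULL
);
CREATE TABLE fact_order_line (           -- grain: one row per order line
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key  INTEGER NOT NULL REFERENCES dim_product(product_key),
    quantity     INTEGER NOT NULL,
    net_amount   REAL NOT NULL
);
CREATE INDEX ix_fact_date ON fact_order_line(date_key);
"""

con = sqlite3.connect(":memory:")
con.executescript(ddl)
print("star schema created:",
      [r[0] for r in con.execute(
          "SELECT name FROM sqlite_master WHERE type='table'")])
```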

Posted 2 months ago

Apply

2.0 - 4.0 years

6 - 11 Lacs

Pune

Hybrid

What's the role all about?
As a BI Developer, you'll be a key contributor to developing Reports in a multi-region, multi-tenant SaaS product. You'll collaborate with the core R&D team to build high-performance Reports to serve the use cases of several applications in the suite.

How will you make an impact?
- Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams.
- Ensure that architectural concepts are consistently implemented across the product.
- Act as a product expert within R&D, understanding the product's requirements and its market positioning.
- Work closely with cross-functional teams (Product Managers, Sales, Customer Support, and Services) to ensure successful product delivery.
- Design and build Reports for given requirements.
- Create design documents and test cases for the reports.
- Develop SQL to address ad-hoc report requirements and conduct analyses.
- Create visualizations and reports as per the requirements.
- Execute unit testing, functional and performance testing, and document the results.
- Conduct peer reviews and ensure quality is met at all stages.

Have you got what it takes?
- Bachelor/Master of Engineering degree in Computer Science, Electronic Engineering, or equivalent from a reputed institute.
- 2-4 years of BI report development experience.
- Expertise in SQL and any cloud-based database; able to write SQL against any DB for any business need.
- Experience in BI tools like Tableau, Power BI, MicroStrategy, etc.
- Experience working in an enterprise Data Warehouse/Data Lake system.
- Strong knowledge of analytical databases and schemas.
- Development experience building solutions that leverage SQL and NoSQL databases; experience/knowledge of Snowflake an advantage.
- In-depth understanding of database management systems, online analytical processing (OLAP), and the ETL (Extract, Transform, Load) framework.
- Experience in functional testing, performance testing, etc.
- Experience with public cloud infrastructure and technologies such as AWS/Azure/GCP.
- Experience with Continuous Integration and Delivery practices using industry-standard tools such as Jenkins.
- Experience working in an Agile development environment and using work-item management tools like JIRA.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager
Role Type: Individual Contributor

Posted 2 months ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS HIRING!!
ROLE: AWS Data Architect
LOCATION: HYDERABAD
YEARS OF EXP: 8+ YEARS

Data Architect - Must have:
- Relational SQL/caching expertise - deep knowledge of Amazon Aurora PostgreSQL, ElastiCache, etc.
- Data modeling - experience in OLTP and OLAP schemas, normalization, denormalization, indexing, and partitioning.
- Schema design & migration - defining best practices for schema evolution when migrating from SQL Server to PostgreSQL.
- Data governance - designing data lifecycle policies, archival strategies, and regulatory compliance frameworks.
- AWS Glue & AWS DMS - leading data migration strategies to Aurora PostgreSQL.
- ETL & data pipelines - expertise in Extract, Transform, Load (ETL) workflows, Glue job features, and event-driven architectures.
- Data transformation & mapping - PostgreSQL PL/pgSQL migration/transformation expertise while ensuring data integrity.
- Cross-platform data integration - connecting cloud and on-premises (or other cloud) data sources.
- AWS data services - strong experience in S3, Glue, Lambda, Redshift, Athena, and Kinesis.
- Infrastructure as Code (IaC) - using Terraform, CloudFormation, or AWS CDK for database provisioning.
- Security & compliance - implementing IAM, encryption (AWS KMS), access control policies, and compliance frameworks (e.g., GDPR, PII).
- Query tuning & indexing strategies - optimizing queries for high performance.
- Capacity planning & scaling - ensuring high availability, failover mechanisms, and auto-scaling strategies.
- Data partitioning & storage optimization - designing cost-efficient hot/cold data storage policies.
- Experience setting up AWS architecture as per project requirements.

Good to have:
- Data warehousing - expertise in Amazon Redshift, Snowflake, or BigQuery.
- Big data processing - familiarity with Apache Spark, EMR, Hadoop, or Kinesis.
- Data lakes & analytics - experience in AWS Lake Formation, Glue Catalog, and Athena.
- Machine learning pipelines - understanding of SageMaker, Bedrock, etc. for AI-driven analytics.
- CI/CD for data pipelines - knowledge of AWS CodePipeline, Jenkins, or GitHub Actions.
- Serverless data architectures - experience with event-driven systems (SNS, SQS, Step Functions).
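One recurring theme above is partitioning for Aurora PostgreSQL. The snippet below holds illustrative PostgreSQL declarative range-partitioning DDL (table and column names invented); it is printed rather than executed because the dialect targets PostgreSQL.

```python
# Illustrative PostgreSQL declarative-partitioning DDL. Names hypothetical.
partition_ddl = """
CREATE TABLE events (
    event_id   bigint      NOT NULL,
    event_ts   timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (event_ts);

CREATE TABLE events_2025_q1 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');

CREATE TABLE events_2025_q2 PARTITION OF events
    FOR VALUES FROM ('2025-04-01') TO ('2025-07-01');

-- Partition pruning then limits scans to the relevant quarter:
-- EXPLAIN SELECT count(*) FROM events WHERE event_ts >= '2025-05-01';
"""
print(partition_ddl)
```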

Posted 2 months ago

Apply

15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: DAML Head - Solution Architect / Technical Delivery
Experience: 15+ Years
Location: Noida

Job Summary:
The DAML Head - Solution Architect / Technical Delivery will be responsible for leading the design and delivery of advanced data management and analytics solutions. This role involves overseeing the creation of modern data warehouses, business intelligence systems, and cutting-edge analytics platforms, with a strong emphasis on AI/ML and Generative AI technologies. The ideal candidate will possess significant experience in Big Data, program management, and senior-level stakeholder engagement.

Key Responsibilities
- Architectural Leadership: Lead the architectural design and development of multi-tenant modern data warehouses, business intelligence systems, and analytics platforms, including AI/ML and Generative AI components. Ensure the platform's security, data isolation, quality, integrity, extensibility, adaptability, scalability, availability, and understandability.
- Big Data and AI/ML Delivery: Oversee the delivery of Big Data and AI/ML projects, ensuring alignment with architectural standards and business objectives.
- Solution Development: Architect scalable, performance-oriented solutions using Big Data technologies and traditional ETL tools, incorporating AI/ML and Generative AI technologies where applicable. Manage logical and physical data models for data warehousing (DW) and OLAP systems.
- Technology Evaluation: Lead the evaluation and selection of technology products to achieve strategic business intelligence and data management goals, including AI/ML and Generative AI technologies.
- Stakeholder Engagement: Facilitate high-level discussions with stakeholders, including CXOs and tech leaders within customer and AWS ecosystems, to refine software requirements and provide guidance on technical components, frameworks, and interfaces.
- Program and People Management: Demonstrate strong program management skills, overseeing multiple projects and ensuring timely delivery. Manage and mentor team members, including junior developers and team leads, ensuring adherence to best practices and fostering professional growth.
- Documentation and Communication: Develop and maintain comprehensive technical design documentation related to data warehouse architecture and systems. Communicate effectively with cross-functional teams to resolve issues, manage changes in scope, and ensure successful project execution.
- Infrastructure Planning: Assess data volumes and customer reporting SLAs, and provide recommendations for infrastructure sizing and orchestration solutions.

Skill Requirements
- Experience: Minimum of 15 years in data management and analytics roles, with substantial experience in Big Data solutions architecture. At least 10 years in Big Data delivery roles, including hands-on experience with AI/ML and Generative AI technologies.
- Technical Expertise: Proficiency with Hadoop distributions (e.g., Hortonworks, Cloudera) and related technologies (e.g., Kafka, Spark, Cloud Data Flow, Pig, Hive, Sqoop, Oozie). Experience with RDBMS (e.g., MySQL, Oracle), NoSQL databases, and ETL/ELT tools (e.g., Informatica PowerCenter, Sqoop). Experience with large-scale cluster installation and deployment.
- Analytical and Management Skills: Strong analytical and problem-solving skills, with the ability to develop multiple solution options. Proven program management capabilities, with a track record of managing complex projects and leading cross-functional teams.
- Knowledge: Deep understanding of Data Engineering, Data Management, Data Science, and AI/ML principles, including Generative AI. Familiarity with design and architectural patterns, as well as cloud-based deployment models. Knowledge of Big Data security concepts and tools, including Kerberos, Ranger, and Knox.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Experience: 5+ Years
Location: Gurgaon (Hybrid)
Budget: 15-18 LPA

Roles and Responsibilities
- Formulate automated reports and dashboards using Power BI and other reporting tools.
- Understand business requirements to set functional specifications for reporting applications.
- Be familiar with the MS SQL Server BI Stack tools and systems, including SSRS, T-SQL, Power Query, MDX, Power BI, and DAX.
- Exhibit a foundational understanding of database concepts such as relational database architecture and multidimensional database design.
- Design data models that transform raw data into insightful knowledge by understanding business requirements in the context of BI.
- Develop technical specifications from business needs, and set deadlines for work completion.
- Produce charts and data documentation that includes descriptions of the techniques, parameters, models, and relationships.
- Use Power BI Desktop to create dashboards, KPI scorecards, and visual reports.
- Establish row-level security on data and understand Power BI's application security layer models.
- Examine, comprehend, and study business needs as they relate to business intelligence.
- Design and map data models to transform raw data into insightful information.
- Create dynamic and eye-catching dashboards and reports using Power BI.
- Make necessary tactical and technological adjustments to enhance current business intelligence systems.
- Integrate data, transform data, and connect to data sources for business intelligence.

Requirements and Skills
- Very good communication skills, necessary to effectively convey requirements between internal teams and client teams.
- Exceptional analytical thinking skills for converting data into illuminating reports and dashboards.
- BS in computer science or information systems, along with work experience in a related field.
- Knowledge of data warehousing, data gateways, and data preparation projects.
- Working knowledge of the Power BI, SSAS, SSRS, and SSIS components of the Microsoft Business Intelligence Stack.
- Articulating, representing, and analyzing solutions with the team while documenting, creating, and modeling them.
- Familiarity with the tools and technologies of the Microsoft SQL Server BI Stack, including SSRS, T-SQL, Power Query, MDX, Power BI, and DAX.
- Knowledge of executing DAX queries on Power BI Desktop.
- Comprehensive understanding of data modeling, administration, and visualization.
- Capacity to perform in an atmosphere where agility and continual development are prioritized.
- Detailed knowledge and understanding of database management systems, OLAP, and the ETL (Extract, Transform, Load) framework.
- Awareness of BI technologies (e.g., Microsoft Power BI, Oracle BI).
- Expertise in SQL queries, SSRS, and SQL Server Integration Services (SSIS).

NOTE: Staffing & Recruitment Companies are advised not to contact us.
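For concreteness, here are hedged examples of the DAX artifacts this role produces: a base measure, a year-over-year measure, and a row-level-security filter expression. Table, column, and measure names are hypothetical, and the strings are shown as text because DAX runs inside the Power BI model, not in Python.

```python
# Illustrative DAX: a base measure, a YoY measure, and an RLS filter.
# All model object names ("Sales", "Users", etc.) are hypothetical.

total_sales_measure = """
Total Sales = SUM ( Sales[Amount] )
"""

yoy_growth_measure = """
Sales YoY % =
DIVIDE (
    [Total Sales] - CALCULATE ( [Total Sales], DATEADD ( 'Date'[Date], -1, YEAR ) ),
    CALCULATE ( [Total Sales], DATEADD ( 'Date'[Date], -1, YEAR ) )
)
"""

# Row-level security: a role's table filter so users see only their region.
rls_filter = "[Region] = LOOKUPVALUE ( Users[Region], Users[Email], USERPRINCIPALNAME() )"

print(total_sales_measure, yoy_growth_measure, rls_filter, sep="\n")
```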

Posted 2 months ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Rippling:
Rippling gives businesses one place to run HR, IT, and Finance. It brings together all of the workforce systems that are normally scattered across a company, like payroll, expenses, benefits, and computers. For the first time ever, you can manage and automate every part of the employee lifecycle in a single system.

Take onboarding, for example. With Rippling, you can hire a new employee anywhere in the world and set up their payroll, corporate card, computer, benefits, and even third-party apps like Slack and Microsoft 365, all within 90 seconds.

Based in San Francisco, CA, Rippling has raised $1.2B from the world's top investors, including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock, and was named one of America's best startup employers by Forbes.

We prioritize candidate safety. Please be aware that official communication will only be sent from @Rippling.com addresses.

About the Role:
The Data Platform team works on building the blocks that are used by other teams at Rippling to create advanced HR applications at lightning-fast speed. At the core of our technological aspirations lies this team, a group dedicated to pushing the boundaries of what's possible with data. We architect high-performance, scalable systems that power the next generation of data products, ranging from reports, analytics, customizable workflows, and search to many new products and capabilities that help customers manage and get unprecedented value from their business data.

This is a unique opportunity to work on both product and platform layers at the same time. We obsess over the scalability and extensibility of platform solutions, ensuring that solutions will meet the needs across the breadth of Rippling's product suite, along with the applications of tomorrow. You won't just be crafting features; you'll be shaping the future of business data management.

What You'll Do:
- Develop high-quality software with attention to detail using tech stacks like Python, MongoDB, CDC, and Kafka
- Leverage big data technologies like Apache Presto, Apache Pinot, Flink, and Airflow
- Build the OLAP stack and data pipelines in support of Reporting products
- Build custom programming languages within the Rippling Platform
- Create data platforms, data lakes, and data ingestion systems that work at scale
- Lead mission-critical projects and deliver data ingestion capabilities end-to-end with high quality
- Have clear ownership of one or many products, APIs, or platform spaces
- Build and grow your engineering skills in different challenging areas and solve hard technical problems
- Influence architecture, technology selections, and trends of the whole company

Qualifications:
- 7+ years of experience in software development, preferably in fast-paced, dynamic environments.
- Solid understanding of CS fundamentals, architectures, and design patterns.
- Proven track record in building large-scale applications, APIs, and developer tools.
- Excellent at cross-functional collaboration; able to articulate technical concepts to non-technical partners.
- You thrive in a product-focused environment and are passionate about making an impact on customer experience.

Bonus Points: contributions to open-source projects (Apache Iceberg, Parquet, Spark, Hive, Flink, Delta Lake, Presto, Trino, Avro)
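A minimal sketch of the CDC-over-Kafka ingestion mentioned in the stack above, assuming the kafka-python package, a broker on localhost:9092, and a hypothetical topic and message envelope:

```python
# Consume CDC events from Kafka. Topic name and message shape are
# hypothetical; assumes kafka-python and a broker on localhost:9092.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "employees.cdc",                      # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Blocks and processes messages as they arrive.
for message in consumer:
    change = message.value
    # A typical CDC envelope carries the operation and the new row image.
    print(change.get("op"), change.get("after"))
```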

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Skill: Data Modeler

Key Responsibilities
- Hands-on data modelling for OLTP and OLAP systems.
- In-depth knowledge of conceptual, logical, and physical data modelling.
- Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same.
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin.
- Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery.
- Functional knowledge of the mutual fund industry is a plus.
- Must be willing to work from Chennai; office presence is mandatory.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 21,000 people globally that cares about your growth, and that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 2 months ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Key Accountabilities

JOB DESCRIPTION
The Azure Data Support Engineer focuses on data-related tasks in Azure: managing, monitoring, and ensuring the security and privacy of data to satisfy business needs.
- Monitor real-time and batch processes to ensure data accuracy.
- Monitor Azure pipelines and troubleshoot where required.
- Enhance existing pipelines and Databricks notebooks as and when required.
- Be involved in the development stages of new pipelines as and when required.
- Troubleshoot pipelines and real-time replication jobs, ensuring minimum data lag.
- Be available to work on a shift basis to cover monitoring during weekends (one weekend out of three).
- Act as an ambassador for DP World at all times when working, promoting and demonstrating positive behaviours in harmony with DP World's principles, values, and culture; ensuring the highest level of safety is applied in all activities; understanding and following DP World's Code of Conduct and Ethics policies.
- Perform other related duties as assigned.

JOB CONTEXT
- Responsible for monitoring and enhancing existing data pipelines using the Microsoft stack.
- Responsible for enhancement of existing data platforms.
- Experience with cloud platforms such as Azure, AWS, Google Cloud, etc.
- Experience with Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Azure Databricks, and Azure SQL Data Warehouse.
- Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver node, worker nodes, stages, executors, and tasks.
- Good understanding of Big Data Hadoop and YARN architecture, along with various Hadoop daemons such as JobTracker, TaskTracker, NameNode, DataNode, Resource/Cluster Manager, and Kafka (distributed stream processing).
- Experience in database design and development with Business Intelligence using SQL Server 2014/2016, Integration Services (SSIS), DTS packages, SQL Server Analysis Services (SSAS), DAX, OLAP cubes, star schema, and snowflake schema.
- Monitoring of pipelines in ADF and experience with Azure SQL, Blob Storage, and Azure SQL Data Warehouse.
- Experience in a support environment working with real-time data replication is a plus.

QUALIFICATIONS, EXPERIENCE AND SKILLS
- Bachelor's/Master's in Computer Science/IT or equivalent.
- Azure certifications will be an added advantage (certification in AZ-900 and/or AZ-204, AZ-303, AZ-304 or AZ-400, DP-200 & DP-201). ITIL certification a plus.
- Experience: 5-8 years.

Must Have Skills
- Azure Data Lake, Data Factory, Azure Databricks
- Azure SQL Database, Azure SQL Data Warehouse
- Hadoop ecosystem
- Azure analytics services
- Programming: Python, R, Spark SQL

Good To Have Skills
- MSBI (SSIS, SSAS, SSRS), Oracle, SQL, PL/SQL
- Data visualization, Power BI
- Data migration
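As one concrete example of the "ensure minimum data lag" duty above, a support engineer might run a quick PySpark check comparing the newest replicated timestamp to the clock. The table and column names are hypothetical, and event_ts is assumed to be stored in the session's local time zone.

```python
# Sketch of a data-lag check in a Databricks-style PySpark snippet.
from datetime import datetime
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lag-check").getOrCreate()

latest = (
    spark.table("replicated.orders")   # hypothetical replicated table
         .agg(F.max("event_ts").alias("latest_event"))
         .collect()[0]["latest_event"]
)

lag = datetime.now() - latest          # assumes naive, session-local timestamps
print(f"replication lag: {lag}")       # page/alert if this exceeds the SLA
```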

Posted 2 months ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Information
Date Opened: 05/08/2025
Job Type: Full time
Industry: Software Product
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600017

Pando is a global leader in supply chain technology, building the world's quickest time-to-value Fulfillment Cloud platform. Pando's Fulfillment Cloud provides manufacturers, retailers, and 3PLs with a single pane of glass to streamline end-to-end purchase order fulfillment and customer order fulfillment to improve service levels, reduce carbon footprint, and bring down costs. As a partner of choice for Fortune 500 enterprises globally, with a presence across APAC, the Middle East, and the US, Pando is recognized as a Technology Pioneer by the World Economic Forum (WEF) and as one of the fastest-growing technology companies by Deloitte.

Role
As the Senior Lead for AI and Data Warehouse at Pando, you will be responsible for building and scaling the data and AI services team. You will drive the design and implementation of highly scalable, modular, and reusable data pipelines, leveraging big data technologies and low-code implementations. This is a senior leadership position where you will work closely with cross-functional teams to deliver solutions that power advanced analytics, dashboards, and AI-based insights.

Key Responsibilities
- Lead the development of scalable, high-performance data pipelines using PySpark or other Big Data ETL pipeline technologies.
- Drive data modeling efforts for analytics, dashboards, and knowledge graphs.
- Oversee the implementation of Parquet-based data lakes.
- Work on OLAP databases, ensuring optimal data structures for reporting and querying.
- Architect and optimize large-scale enterprise big data implementations with a focus on modular and reusable low-code libraries (see the sketch after this posting).
- Collaborate with stakeholders to design and deliver AI and DWH solutions that align with business needs.
- Mentor and lead a team of engineers, building out the data and AI services organization.

Requirements
- 8-10 years of experience in big data and AI technologies, with expertise in PySpark or similar Big Data ETL pipeline technologies.
- Strong proficiency in SQL and OLAP database technologies.
- First-hand experience with data modeling for analytics, dashboards, and knowledge graphs.
- Proven experience with Parquet-based data lake implementations.
- Expertise in building highly scalable, high-volume data pipelines.
- Experience with modular, reusable, low-code-based implementations.
- Involvement in large-scale enterprise big data implementations.
- Initiative-taker with strong motivation and the ability to lead a growing team.

Preferred
- Experience leading a team or building out a new department.
- Experience with cloud-based data platforms and AI services.
- Familiarity with supply chain technology or fulfilment platforms is a plus.

Join us at Pando and lead the transformation of our AI and data services, delivering innovative solutions for global enterprises!
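The "modular, reusable low-code libraries" phrase above is abstract, so here is one plain-Python sketch of the idea: small named transform steps registered once and composed by configuration. The step names and record shape are invented; a real implementation would wrap PySpark DataFrames rather than lists of dicts.

```python
# Sketch of a modular, reusable pipeline: named transform steps composed
# from configuration. Step names and record shape are hypothetical.
from functools import reduce

STEP_REGISTRY = {}

def step(name):
    """Register a reusable transform under a name."""
    def register(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return register

@step("drop_nulls")
def drop_nulls(rows):
    return [r for r in rows if all(v is not None for v in r.values())]

@step("add_total")
def add_total(rows):
    return [{**r, "total": r["qty"] * r["price"]} for r in rows]

def run_pipeline(step_names, rows):
    # Apply each named step in order; new flows are just new step lists.
    return reduce(lambda data, name: STEP_REGISTRY[name](data), step_names, rows)

orders = [{"qty": 2, "price": 5.0}, {"qty": None, "price": 3.0}]
print(run_pipeline(["drop_nulls", "add_total"], orders))
```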

Posted 2 months ago

Apply

4.0 - 9.0 years

10 - 12 Lacs

Pune

Work from Office

A Snapshot of Your Days
Your role as a Senior JEDOX Developer is to work daily with global business users who submit tickets via SharePoint or a mailbox. You will also coordinate and work with the appropriate IT development and middleware teams to find a solution that meets the agreed Operational Level Agreement and fix issues within the agreed Service Level Agreement. Besides that, you will take part in the monthly closing process, where you will coordinate with end users regarding the data entered in the system and verify the same. You will also join the sprint development meetings to understand and keep up with ongoing developments. Work closely with collaborators and senior management, expand your network, and prepare yourself for future global roles at Siemens Energy.

Your opportunities for personal growth:
- Collaborate with people from different countries, cultures, and backgrounds.
- Work without supervision.
- Work innovatively.

How You'll Make an Impact / Responsibilities
- Lead the design, development, and implementation of data pipelines and ETL workflows.
- Manage and optimize workflows to ensure reliable data processing and job scheduling.
- Design and implement data solutions in the database.
- Be creative and proactive with report design and development using little to no documented requirements.
- Collaborate with cross-functional teams to gather requirements and translate them into scalable data architecture and process designs, fostering a culture of continuous improvement and innovation.
- Ensure data quality and integrity by implementing standard processes in data governance and validation.
- Monitor performance, troubleshoot issues, and optimize data systems for efficiency and scalability.
- Stay abreast of industry trends and emerging technologies to ensure continuous improvement of data engineering practices.

What You Bring / Skills, Capabilities
- You are an experienced (6+ years) IT professional with a graduation in Engineering or an equivalent qualification (MCA).
- 4+ years of relevant work experience in developing and maintaining ETL workflows.
- 4+ years of relevant work experience in data analytics and reporting tools like Power BI, Tableau, SAC.
- 4+ years of relevant work experience in Snowflake or any cloud database, with proven knowledge of writing complex SQL queries.
- Good to have: experience working with EPM tools like JEDOX, ANAPLAN, TM1.
- Good to have: experience with multidimensional database concepts like OLAP, cubes, dimensions, etc.
- Good to have: experience in developing Power Automate workflows.
- Good to have: experience with Excel formulas like PIVOT and VLOOKUP.
- Ability to learn new software and technologies quickly and adapt to an ambitious and fast-paced environment.
- Experience collaborating directly with business users and relevant collaborators.

About the Team
Who is Siemens Energy? At Siemens Energy, we are more than just an energy technology company. We meet the growing energy demand across 90+ countries while ensuring our climate is protected. With more than 94,000 dedicated employees, we not only generate electricity for over 16% of the global community, but we're also using our technology to help protect people and the environment. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation.

Discover the ways you can contribute to Siemens Energy.
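As a small illustration of the multidimensional concepts the posting lists (OLAP, cubes, dimensions), the Snowflake-dialect query below uses GROUP BY CUBE to compute subtotals across every dimension combination, the relational analogue of an OLAP cube rollup. Table and column names are hypothetical, and the string is printed rather than executed since it targets Snowflake.

```python
# Illustrative Snowflake-dialect SQL: GROUP BY CUBE produces totals for
# every combination of the listed dimensions. Names are hypothetical.
cube_query = """
SELECT region,
       product_line,
       SUM(net_amount) AS total_amount
FROM   finance.actuals
GROUP  BY CUBE (region, product_line)
ORDER  BY region, product_line;
"""
print(cube_query)
```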

Posted 2 months ago

Apply

2.0 - 3.0 years

6 - 10 Lacs

Vadodara

Work from Office

Job Description
When looking to buy a product, whether in a brick-and-mortar store or online, it can be hard enough to find one that not only has the characteristics you are looking for but is also at a price you are willing to pay. It can be especially frustrating when you finally find one, but it is out of stock. Likewise, brands and retailers can have a difficult time getting the visibility they need to ensure you have the most seamless experience possible in selecting their product. We at Wiser believe that shoppers should have this seamless experience, and we want to do that by providing brands and retailers the visibility they need to make that belief a reality.

Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze lots of structured and semi-structured data from lots of different places every day (whether it's 20 million+ products from 500+ websites or data collected from over 300,000 brick-and-mortar stores across the country). We help our customers be more competitive by discovering interesting patterns in this data they can use to their advantage, while being uniquely positioned to do this across both online and in-store.

We are looking for a lead-level software engineer to lead the charge on a team of like-minded individuals responsible for developing the data architecture that powers our data collection process and analytics platform. If you have a passion for optimization, scaling, and integration challenges, this may be the role for you.

What You Will Do
- Think like our customers - work with product and engineering leaders to define data solutions that support customers' business practices.
- Design/develop/extend our data pipeline services and architecture to implement your solutions - you will collaborate on some of the most important and complex parts of our system that form the foundation for the business value our organization provides.
- Foster team growth - provide mentorship to junior team members and evangelize expertise to others.
- Improve the quality of our solutions - help build enduring trust within our organization and amongst our customers by ensuring high quality standards for the data we manage.
- Own your work - take responsibility to shepherd your projects from idea through delivery into production.
- Bring new ideas to the table - some of our best innovations originate within the team.

Technologies We Use
- Languages: SQL, Python
- Infrastructure: AWS, Docker, Kubernetes, Apache Airflow, Apache Spark, Apache Kafka
- Databases: Snowflake, Trino/Starburst, Redshift, MongoDB, Postgres, MySQL
- Others: Tableau (as a business intelligence solution)

Qualifications
- Bachelor's/Master's degree in Computer Science or a relevant technical degree
- 10+ years of professional software engineering experience
- Strong proficiency with data languages such as Python and SQL
- Strong proficiency with data processing technologies such as Spark, Flink, and Airflow
- Strong proficiency with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Snowflake, etc.)
- Solid understanding of streaming solutions such as Kafka, Pulsar, Kinesis/Firehose, etc.
- Solid understanding of Docker and Kubernetes
- Solid understanding of ETL/ELT and OLTP/OLAP concepts
- Solid understanding of columnar/row-oriented data structures (e.g., Parquet, ORC, Avro)
- Solid understanding of Apache Iceberg or other open table formats
- Proven ability to transform raw unstructured/semi-structured data into structured data in accordance with business requirements
- Solid understanding of AWS, Linux, and infrastructure concepts
- Proven ability to diagnose and address data abnormalities in systems
- Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs
- Experience building data warehouses using conformed dimensional models
- Experience building data lakes and/or leveraging data lake solutions (e.g., Trino, Dremio, Druid)
- Experience working with business intelligence solutions (e.g., Tableau)
- Experience working with ML/agentic AI pipelines (e.g., LangChain, LlamaIndex)
- Understanding of Domain-Driven Design concepts and accompanying microservice architecture
- Passion for data, analytics, or machine learning
- Focus on value: shipping software that matters to the company and the customer

Bonus Points
- Experience working with vector databases
- Experience working within a retail or ecommerce environment
- Proficiency in other programming languages such as Scala, Java, Golang, etc.
- Experience working with Apache Arrow and/or other in-memory columnar data technologies

Supervisory Responsibility
- Provide mentorship to team members on adopted patterns and best practices.
- Organize and lead agile ceremonies such as daily stand-ups, planning, etc.
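Since Apache Airflow anchors the orchestration stack above, here is a minimal DAG sketch with two dependent tasks on a daily schedule. The DAG id and task bodies are hypothetical; it assumes Airflow 2.4+ (for the `schedule` argument).

```python
# Minimal Airflow DAG: two dependent tasks on a daily schedule.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_products():
    print("collect product data from source sites")

def load_warehouse():
    print("load curated data into the warehouse")

with DAG(
    dag_id="product_data_pipeline",   # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_products)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)
    extract >> load  # load runs only after extract succeeds
```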

Posted 2 months ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Pune

Work from Office

What's the role all about?
As a Senior BI Developer, you'll be a key contributor to developing Reports in a multi-region, multi-tenant SaaS product. You'll collaborate with the core R&D team to build high-performance Reports to serve the use cases of several applications in the suite.

How will you make an impact?
- Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams.
- Ensure that architectural concepts are consistently implemented across the product.
- Act as a product expert within R&D, understanding the product's requirements and its market positioning.
- Work closely with cross-functional teams (Product Managers, Sales, Customer Support, and Services) to ensure successful product delivery.

Have you got what it takes?
- Bachelor/Master of Engineering degree in Computer Science, Electronic Engineering, or equivalent from a reputed institute
- 4-7 years of BI report development experience
- Expertise in SQL and any cloud-based database; able to write SQL against any DB for any business need
- Expertise in BI tools like Tableau, Power BI, MicroStrategy, etc.
- Experience working in an enterprise Data Warehouse/Data Lake system
- Strong knowledge of analytical databases and schemas
- Development experience building solutions that leverage SQL and NoSQL databases; experience/knowledge of Snowflake an advantage
- In-depth understanding of database management systems, online analytical processing (OLAP), and the ETL (Extract, Transform, Load) framework
- Experience in functional testing, performance testing, etc.
- Experience with performance test script generation - JMeter, Gatling, etc.
- Experience automating the testing process for E2E and regression cases
- Experience in Java/web services is an added advantage
- Experience with public cloud infrastructure and technologies such as AWS/Azure/GCP

What's in it for you?
Enjoy NICE-FLEX!

Requisition ID: 6632
Reporting into: Tech Manager
Role Type: Individual Contributor

Posted 2 months ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
- Total 6+ years of experience.
- 3 years of experience with leading automation tools and white-box testing (Java APIs), e.g., JUnit.
- 2 years of software development in Java 2EE.
- Experience in other automation tools, e.g., Selenium, Mercury tools, or a self-created test-harness tool.
- 4-year college degree in Computer Science or a related field, e.g., BE or MCA.
- Good understanding of XML, XSL/XSLT, RDBMS, and Unix platforms.
- Experience in multi-dimensional (OLAP) technology, data warehouse, and financial software would be desirable.
- Motivated individual, keen on learning leading-edge technology and testing complex software.

Career Level - IC3

Responsibilities
As above: the role calls for 6+ years of total experience spanning automation tooling, white-box testing of Java APIs, Java 2EE development, and testing of OLAP/data warehouse software.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry-leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 months ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Design, develop, troubleshoot and debug software programs for databases, applications, tools, networks etc.As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with the developing, designing and debugging of software applications or operating systems.Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to functional area. 7 years of software engineering or related experience.ResponsibilitiesOverview of Product – Oracle AnalyticsBe part of an energetic and challenging team building an enterprise Analytic platform that will allow users to quickly gain insights on their most valuable asset; data. Oracle Analytics is an industry-leading product that empowers entire organizations with a full range of business analytics tools, enterprise ready reporting and engaging, and easy-to-use self-service data visualizations. Our customers are business users that demand a software product that allows easy, fast navigation through the full spectrum of data scale from simple spreadsheets to analyzing enormous volumes of information in enterprise class data warehouses.Oracle Analytics is a comprehensive solution to meet the breadth of all analytics needs. Get the right data, to the right people, at the right time with analytics for everyone in your organization. With built-in security and governance, you can easily share insights and collaborate with your colleagues. By leveraging the cloud, you can scale up or down to suit your needs. The Oracle Analytics Cloud offering is a leading cloud service at Oracle built on Oracle Cloud Infrastructure. It runs with a Generation 2 offering and provides consistent high performance and unmatched governance and security controls.Self-service analytics drive business agility with faster time to insights. You no longer need help from IT to access, prepare, analyze, and collaborate on all your data. Easily create data visualizations with automated chart recommendations and optimize insights by collaborating with colleagues on analyses.Augmented analytics with embedded machine learning throughout the platform drive smarter and better insights. Always on—and always working in the background, machine learning is continuously learning from the data it takes in, making it smarter and more accurate as time goes by. Uncover deeper patterns and predict trends for impactful, unbiased recommendations.On the team we develop, deploy, and support the Oracle Analytics platform helping our customers succeed in their journey to drive business value. You will be working with experts in their field, exploring the latest technologies, you will be challenged while creating features that will be delivered to our customers, asked to be creative, and hopefully have some fun along the way. Members of our team are tasked to take on challenges along all aspect of our product.https://www.oracle.com/solutions/business-analytics Career Level - IC4 Responsibilities As a member of the development team, you will design, code, debug, and deliver innovative analytic features that involve in C++ development with extensive exposure on highly scalable, distributed, multithreaded applications. 
You will work closely with your peer developers located across the world, including Mexico, India, and the USA. Key responsibilities include:

- Design, develop, test, and deliver new features on a world-class analytics platform suitable for deployment to both Oracle Cloud and on-premise environments
- Lead the creation of formal design specifications and the coding of complex systems
- Work closely with Product Management on product requirements and functionality
- Build software applications following established coding standards
- Communicate continually with the project teams, explaining progress on the development effort
- Contribute to continuous improvement by suggesting improvements to the user interface or software architecture, or recommending new technologies
- Ensure quality of work through development standards and QA procedures
- Perform maintenance and enhancements on existing software

Key Qualifications:

- BS/MS in Computer Science or a related major
- Exceptional analytic and problem-solving skills
- Extensive experience using, building, and debugging multithreaded applications
- Ability to design large, scalable systems for enterprise customers
- Solid understanding of concurrency, multithreading, and memory management
- Experienced in C++ programming, including templates, the STL, and object-oriented patterns
- Interest or experience in database kernel development
- Understanding of SQL and relational data processing concepts such as joins and indexing strategies (illustrated in the sketch below)
- Experience with Java, Python, or other scripting languages
- Experienced in distributed and scalable server-side software development
- Knowledge of developing, implementing, and optimizing software algorithms
- Solid knowledge of data structures and operating systems
- Basic understanding of Agile/Scrum development methodologies
- Hands-on experience using source control tools such as Git
- Strong written and verbal English communication skills
- Self-motivated and passionate about developing high-quality software
- Strong team player

Other Qualifications:

- Knowledge of Business Intelligence or Analytics
- Familiarity with SQL query optimization and execution
- Experience with Big Data technologies (such as Hadoop, Spark)
- Interest or experience in OLAP, data warehousing, or multidimensional databases
- Familiarity with cloud services such as OCI, AWS, or Azure
- Knowledge of Terraform/Python
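For candidates wondering what "relational data processing concepts" means at the kernel level, here is a minimal, illustrative Python sketch of an in-memory hash join, the strategy many engines fall back on when no suitable index exists; the tables, columns, and rows are invented for the example.

```python
# Minimal hash join sketch: build a hash table on the smaller input,
# then probe it with each row of the larger input. Illustrative only;
# real database kernels add spilling, vectorization, and NULL handling.

def hash_join(left, right, key):
    # Build phase: index the smaller relation by the join key.
    build_side = {}
    for row in left:
        build_side.setdefault(row[key], []).append(row)
    # Probe phase: stream the larger relation and emit matches.
    for row in right:
        for match in build_side.get(row[key], []):
            yield {**match, **row}

# Hypothetical sample data (not from the job posting).
customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 2, "name": "Grace"}]
orders = [{"cust_id": 1, "total": 40.0}, {"cust_id": 1, "total": 15.5}]

for joined in hash_join(customers, orders, "cust_id"):
    print(joined)  # e.g. {'cust_id': 1, 'name': 'Ada', 'total': 40.0}
```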
About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 months ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Manager, Business Analyst – C1

Employment Type: Permanent
Location: Chennai

Responsible Functions

- Gen AI: Expertise in leveraging advanced AI technologies to analyze business processes, identify automation and optimization opportunities, and drive data-driven decision-making. Ability to collaborate with stakeholders to translate business needs into AI solutions, ensuring seamless integration and maximizing operational efficiency, productivity, and innovation.
- Product Vision & Strategy: Perform market analysis to understand the market landscape, including competitor analysis, trends, and customer needs, to help define and communicate a product vision and strategy aligned with company objectives.
- Stakeholder Engagement: Interact with diverse stakeholders to conduct JAD sessions, and use a variety of techniques to elicit, elaborate, analyze, and validate client requirements. Interact with the business team to conduct product demonstrations and to evaluate, prioritize, and build new features and functions.
- Requirements Management: Analyze and develop the Business Requirements Document (BRD) and Functional Specification Document (FSD) for client/business reference. Translate business requirements into user stories, prioritize the backlog, and conduct Scrum ceremonies for development consumption.
- Functional Solution Development: Responsible for end-to-end functional solutioning. Analyze the business problem and validate the key business requirements to create a complete picture of the workflows and technical requirements fulfilled by existing and proposed software. Identify, define, and evaluate potential product solutions, including off-the-shelf and open-source components and system architecture, to ensure that they meet business requirements.
- Communication & Collaboration: Be a strong interface between business and internal stakeholders. Collaborate with the development team (including architecture, coding, and testing teams) to produce and maintain additional product and project deliverables such as technical design, testing and program specifications, additional test scenarios, and the project plan. Proactively manage expectations regarding roadblocks in the critical path to help ensure successful delivery of the solution.
- Business Value: Comprehend the business value of the fundamental solution being developed; assess its fit within the overall architecture, risks, and technical feasibility. Drive business metrics that help optimize the business, and deep-dive into data for insights as required.
- Team Management: Manage a small team of Business Analysts, define clear goals, and be accountable for the functional solution delivered by the team. Participate in recruitment and in building a strong BA team.
- RFP Support: Participate in Request for Information/Proposal handling and support with responses and solutions to questions or information requested.
- Client/Business Training: Work with technical writers to create training material and handle product/platform training sessions with diverse stakeholders.
Essential Functions

- Multi-disciplinary technologist who enjoys designing, executing, and selling healthcare solutions, and being on the front line of client communications and selling strategies
- Deep understanding of the US healthcare value chain and key impact drivers (Payer and/or Provider)
- Knowledgeable and cognizant of how data management and data science are used to solve organizational problems in the healthcare context
- Hands-on experience in two (or more) areas of the data and analytics technical domains: enterprise cloud data warehousing, integration, preparation, and visualization, along with artificial intelligence, machine learning, data science, data modeling, data management, and data governance
- Strong problem-solving and analytical skills: the ability to break down a vague business problem into structured data analysis approaches, and the ability to work with incomplete information and take judgment-driven decisions based on experience
- Experience ramping up analytics programs with new clients, including integrating with the work of other teams to ensure the analytics approach is aligned with operations, as well as engaging in consultative selling

Primary Internal Interactions

- Review with the Product Manager and AVP for improvements in the product development lifecycle
- Assessment meetings with VP and above for additional product development features
- Manage a small team of Business Analysts to lead the requirements effort for product development

Primary External Interactions

- Communicate with onshore stakeholders and executive team members
- Help the Product Management Group set the product roadmap and help identify future sellable product features
- Client interactions to better understand expectations and streamline solutions; if required, act as a bridge between the client and the technology teams

Technical Skills

Required Skills: SME in US Healthcare with deep knowledge of the claims and payments lifecycle, with at least 8 years of experience working with various US healthcare payer clients

Skills Must Have

- Excellent understanding of the Software Development Life Cycle and methodologies such as Agile Scrum and Waterfall
- Strong experience in requirements elicitation techniques, functional documentation, stakeholder management, business solution validation, and user walkthroughs
- Strong documentation skills to create BRDs, FSDs, process flows, and user stories
- Strong presentation skills
- Good knowledge of SQL
- Knowledge of tools like Azure DevOps, Jira, Visio, Draw.io, etc.
- Experience in AI or Gen AI projects

Skills Nice To Have

- Development experience of 2 or more years
- Experience with Big Data tools, including but not limited to Python, Spark + Python, Hive, HBase, Sqoop, CouchDB, MongoDB, MS SQL, Cassandra, Kafka
- Knowledge of data analysis tools (online analytical processing (OLAP), ETL frameworks) (see the rollup sketch below)
- Knowledge of enterprise modeling tools and data integration platforms (Erwin, Embarcadero, Informatica, Talend, SSIS, DataStage, Pentaho)
- Knowledge of enterprise business intelligence platforms (Tableau, Power BI, Business Objects, MicroStrategy, Cognos)
- Knowledge of enterprise data warehousing platforms (Oracle, Microsoft, DB2, Snowflake, AWS, Azure, Google Cloud Platform)
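To make the OLAP nice-to-have concrete, here is a minimal, self-contained Python sketch of an OLAP-style aggregation over a hypothetical claims table; the table, columns, and figures are invented for illustration and are not tied to any client data.

```python
import sqlite3

# Illustrative OLAP-style aggregation over a tiny, invented claims table.
# Real payer analytics would run against a warehouse (Snowflake, etc.),
# but the GROUP BY pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (state TEXT, claim_type TEXT, paid REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("TX", "inpatient", 1200.0), ("TX", "outpatient", 300.0),
     ("CA", "inpatient", 2500.0), ("CA", "inpatient", 900.0)],
)

# Aggregate paid amounts by state and claim type -- the kind of slice
# an OLAP cube would precompute along its dimensions.
for row in conn.execute(
    "SELECT state, claim_type, COUNT(*) AS n, SUM(paid) AS total_paid "
    "FROM claims GROUP BY state, claim_type ORDER BY state"
):
    print(row)
```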
Process Specific Skills

- Delivery Domain: Software Development – SDLC & Agile certifications
- Business Domain: US Healthcare & Payer Analytics; Payment Integrity; Fraud, Waste & Abuse; Claims Management

Soft Skills

- Understanding of the healthcare business vertical and the business terms within it
- Good analytical skills
- Strong communication skills, oral and written
- Ability to work with various stakeholders across different geographical locations
- Able to function as an individual contributor if required
- Strong aptitude to learn and implement healthcare solutions
- Good leadership skills

Working Hours

General shift: 12 PM to 9 PM; will be required to extend as per project release needs

Education Requirements

Master's or Bachelor's degree from a top-tier college with good grades, preferably in a relevant field such as Mathematics, Statistics, or Computer Science, or equivalent experience

Posted 2 months ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Senior Business Analyst/Lead Business Analyst – B1/B2

Employment Type: Permanent
Location: Chennai

Responsible Functions

- Product Vision & Strategy: Help with inputs on product features through market analysis to understand the market landscape, including competitor solutions, trends, and customer needs.
- Stakeholder Engagement: Interact with diverse stakeholders to conduct JAD sessions, and use a variety of techniques to elicit, document, analyze, and validate client requirements. Interface with the business team to conduct product demonstrations and to evaluate, prioritize, and build new features and functions.
- Requirements Management: Analyze and develop the business requirements document (BRD) for client/business reference. Translate business requirements into user stories, and create and prioritize them across the backlog, sprints, definition of done (DoD), and releases using Jira for development consumption. Perform requirements reviews with external and internal stakeholders and resolve issues while suggesting corrective actions.
- Functional Solution Development: Responsible for the end-to-end functional solution. Analyze the business problem and validate the key business requirements to create a complete picture of the workflows and technical requirements fulfilled by existing and proposed software. Identify, define, and evaluate potential product solutions, including off-the-shelf and open-source components and system architecture, to ensure that they meet business requirements.
- Communication & Collaboration: Act as a liaison between business users and technical solutions/support groups to ensure proper communication between diverse teams. Collaborate with the development team (including architecture, coding, and testing teams) to produce and maintain additional product and project deliverables across technical design, testing and program specifications, additional test scenarios, and the project plan. Proactively manage expectations regarding roadblocks in the critical path to help ensure successful delivery of the solution.
- Business Value: Comprehend the fundamental solution being developed/deployed, its business value, and how it fits with the overall architecture, risks, and more. Drive business metrics that help optimize the business, and deep-dive into data for insights as required.
- Team Mentoring: Train and mentor juniors in the team on a need basis.

Essential Functions

- Technologist who enjoys executing and selling healthcare solutions, and being on the front line of client communications
- Good understanding of the US healthcare value chain and key impact drivers (Payer and/or Provider)
- Knowledgeable and cognizant of how data management and data science are used to solve organizational problems in the healthcare context
- Hands-on experience in at least one area of the data and analytics technical domains: enterprise cloud data warehousing, integration, preparation, and visualization, along with artificial intelligence, machine learning, data science, data modeling, data management, and data governance
- Strong problem-solving and analytical skills: the ability to break down a vague business problem into structured data analysis approaches, and the ability to work with incomplete information and take judgment-driven decisions based on experience

Primary Internal Interactions

- Review with the Product Manager and AVP for improvements in the product development lifecycle
- Assessment meetings with VP and above for additional product development features
- Train and mentor juniors in the team on a need basis

Primary External Interactions

- Communicate with onshore stakeholders and executive team members
- Help the Product Management Group set the product roadmap and help identify future sellable product features
- Client interactions to better understand expectations and streamline solutions; if required, act as a bridge between the client and the technology teams

Technical Skills

Required Skills: Good knowledge of US Healthcare, with at least 3 years of experience working with various US healthcare payer clients

Skills Must Have

- Good understanding of the Software Development Life Cycle and methodologies such as Agile Scrum and Waterfall
- Strong experience in requirements elicitation techniques, functional documentation, stakeholder management, business solution validation, and user walkthroughs
- Strong documentation skills to create BRDs, FSDs, process flows, and user stories
- Strong presentation skills
- Basic knowledge of SQL
- Knowledge of tools like Jira, Visio, Draw.io, etc.

Skills Nice To Have

- Development experience of 1 or 2 years
- Experience with Big Data tools, including but not limited to Python, Spark + Python, Hive, HBase, Sqoop, CouchDB, MongoDB, MS SQL, Cassandra, Kafka
- Knowledge of data analysis tools (online analytical processing (OLAP), ETL frameworks)
- Knowledge of enterprise modeling tools and data integration platforms (Erwin, Embarcadero, Informatica, Talend, SSIS, DataStage, Pentaho)
- Knowledge of enterprise business intelligence platforms (Tableau, Power BI, Business Objects, MicroStrategy, Cognos)
- Knowledge of enterprise data warehousing platforms (Oracle, Microsoft, DB2, Snowflake, AWS, Azure, Google Cloud Platform)

Process Specific Skills

- Delivery Domain: Software Development – SDLC & Agile certifications
- Business Domain: US Healthcare Insurance & Payer Analytics; Fraud, Waste & Abuse; Payer Management; Code Classification Management

Soft Skills

- Understanding of the healthcare business vertical and the business terms within it
- Good analytical skills
- Strong communication skills, oral and written
- Ability to work with various stakeholders across different geographical locations
- Able to function as an individual contributor if required
- Strong aptitude to learn and implement healthcare solutions
- Ability to work independently

Working Hours

General shift: 12 PM to 9 PM; will be required to extend as per project release needs

Education Requirements

Master's or Bachelor's degree from a top-tier college with good grades, preferably in a relevant field such as Mathematics, Statistics, or Computer Science, or equivalent experience

Posted 2 months ago

Apply

7.0 years

0 Lacs

Madhya Pradesh, India

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes, and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and we have 3.44 PB of RAM deployed across our fleet of C* servers, with this traffic growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe, and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation, and a fanatical commitment to our customers, our community, and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role

The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing covers various facets, including threat events collected via telemetry data, associated metadata, IT asset information, and contextual information about threat exposure based on additional processing. These facets make up the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse built and owned by the Data Platform team. The ingestion mechanisms include both batch and near-real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations, and more.

As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists, and threat analysts to design, implement, and maintain scalable ML pipelines, used for data preparation, cataloging, feature engineering, model training, and model serving, that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do

- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines (see the DAG sketch below)
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment
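As a flavor of the workflow-orchestration work described above, here is a minimal, illustrative Airflow 2.x DAG sketching a prepare-features → train → evaluate pipeline; the DAG name, task names, and function bodies are hypothetical placeholders, not CrowdStrike's actual pipeline.

```python
# A minimal Airflow DAG sketch for an ML pipeline: prepare features,
# train a model, then evaluate it. Task bodies are stubs for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def prepare_features():
    print("pull raw events, build feature table")  # placeholder

def train_model():
    print("fit model on the feature table")  # placeholder

def evaluate_model():
    print("score holdout set, log metrics")  # placeholder

with DAG(
    dag_id="ml_experiment_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    prep = PythonOperator(task_id="prepare_features", python_callable=prepare_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)

    prep >> train >> evaluate  # linear dependency chain
```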
What You'll Need

- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
- 3+ years of experience developing and deploying machine learning solutions to production
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used
- 3+ years of experience with ML platform tools such as Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform products or features with Apache Spark, Flink, or comparable tools in GCP; experience with Iceberg is highly desirable
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform and FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended
- Ability to write Python interfaces that provide standardized and simplified access to internal CrowdStrike tools for data scientists
- Expert-level experience with CI/CD frameworks such as GitHub Actions
- Expert-level experience with containerization frameworks
- Strong analytical and problem-solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience With The Following Is Desirable

- Go
- Iceberg
- Pinot or another time-series/OLAP-style database
- Jenkins
- Parquet (see the sketch below)
- Protocol Buffers/gRPC
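For readers unfamiliar with Parquet from the list above, here is a small, self-contained Python sketch that writes and reads a columnar Parquet file; pyarrow is assumed as one common library choice (not something mandated by the role), and the event data is invented.

```python
# Write and read a columnar Parquet file with pyarrow (illustrative only).
import pyarrow as pa
import pyarrow.parquet as pq

# Invented telemetry-like events; real pipelines would stream far more.
events = pa.table({
    "host_id": ["h1", "h2", "h1"],
    "event_type": ["login", "process_start", "login"],
    "ts": [1700000000, 1700000005, 1700000042],
})

pq.write_table(events, "events.parquet")  # columnar, compressed on disk

# Reading back only the columns you need is where Parquet shines.
subset = pq.read_table("events.parquet", columns=["host_id", "ts"])
print(subset.to_pydict())
```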
Benefits Of Working At CrowdStrike

- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups, and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations, and social/recreational programs, on valid job requirements.

If you need assistance accessing or reviewing the information on this website, or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description

You are a strategic thinker, passionate about driving solutions in data visualization. You have found the right team. As a Data Visualization Associate within our Databricks team, you will be responsible for designing, developing, and optimizing data models to support data integration, transformation, and analytics. We value your expertise in handling data from various sources and your commitment to ensuring scalable, efficient, and high-quality data solutions.

Job Responsibilities

- Design and implement data models (conceptual, logical, and physical) to support business requirements; hands-on experience with the Erwin tool is an added advantage.
- Work with structured and unstructured data from multiple sources and integrate them into Databricks.
- Develop ETL/ELT pipelines to extract, transform, and load data efficiently (see the ETL sketch after the qualifications below).
- Optimize data storage, processing, and performance in Databricks.
- Collaborate with data engineers, analysts, and business stakeholders to understand data needs.
- Ensure data governance, quality, and compliance with industry standards.
- Create and maintain documentation for data models, pipelines, and architectures.
- Troubleshoot and optimize queries and workflows for performance improvement.
- Create and modify queries at the consumption level for end users.

Required Qualifications, Capabilities, And Skills

- 5+ years of experience in data modeling/data engineering
- Strong expertise in Databricks, Delta Lake, Apache Spark, and advanced queries
- Experience with SQL and Python for data manipulation
- Knowledge of ETL/ELT processes and data pipeline development
- Hands-on experience with data warehousing, relational databases, and NoSQL
- Familiarity with data governance, security, and compliance best practices
- Strong problem-solving skills and the ability to work in an agile environment

Preferred Qualifications, Capabilities, And Skills

- Experience working with large-scale data systems and streaming data
- Knowledge of business intelligence (BI) tools and reporting frameworks
- Experience in the finance domain (P&A, Markets, etc.) is preferable
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus
- Experience with OLAP tools (TM1, Essbase, Atoti, etc.) is a plus
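As a flavor of the ETL/ELT responsibility referenced above, here is a minimal PySpark sketch that reads raw CSV, applies a simple transformation, and writes a Delta table; the paths, columns, and schema are invented for illustration and assume a Databricks-style environment where Delta Lake is available.

```python
# Minimal ETL sketch for a Databricks-style environment (illustrative).
# Assumes Delta Lake is available on the cluster; paths/columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: raw trades land as CSV (hypothetical path and layout).
raw = spark.read.option("header", True).csv("/mnt/raw/trades.csv")

# Transform: cast types, derive a business date, drop obvious bad rows.
clean = (
    raw.withColumn("notional", F.col("notional").cast("double"))
       .withColumn("trade_date", F.to_date("trade_ts"))
       .filter(F.col("notional").isNotNull())
)

# Load: write a Delta table, partitioned for downstream query performance.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("trade_date")
      .save("/mnt/curated/trades"))
```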
About Us

JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses, and many of the world's most prominent corporate, institutional, and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years, and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing, and asset management. We recognize that our people are our strength, and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team

J.P. Morgan's Commercial & Investment Bank is a global leader across banking, markets, securities services, and payments. Corporations, governments, and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk, and extends liquidity in markets around the world.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Apply with an updated CV to hr@bitstringit.com

Main Tasks:

- Maintain and develop data platforms based on Microsoft Fabric for business intelligence and Databricks for real-time data analytics (see the streaming sketch at the end of this listing)
- Design, implement, and maintain standardized, production-grade data pipelines using modern data transformation processes and workflows for SAP, MS Dynamics, on-premise or cloud
- Develop an enterprise-scale, cloud-based data lake for business intelligence solutions
- Translate business and customer needs into data collection, preparation, and processing requirements
- Optimize the performance of algorithms developed by data scientists
- General administration and monitoring of the data platforms

Competencies:

- Working with structured and unstructured data
- Experienced in various database technologies (RDBMS, OLAP, time series, etc.)
- Solid programming skills (Python, SQL; Scala is a plus)
- Experience in Microsoft Fabric (incl. Warehouse, Lakehouse, Data Factory, Dataflow Gen2, Semantic Model) and/or Databricks (Spark)
- Proficient in Power BI
- Experienced working with APIs
- Proficient in security best practices
- Data-centric Azure know-how is a plus (Storage, Networking, Security, Billing)

Education / experience / language:

- Bachelor's or Master's degree in business informatics, computer science, or equivalent
- A background in software engineering (e.g., agile programming, project organization) and experience with human-centered design would be desirable
- Extensive experience in handling large data sets
- At least 5 years of experience working as a data engineer, preferably in an industrial company
- Analytical problem-solving skills and the ability to assimilate complex information
- Programming experience in modern data-oriented languages (SQL, Python)
- Experience with Apache Spark and DevOps
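To illustrate the real-time analytics side of the role, here is a minimal Spark Structured Streaming sketch that aggregates a demo event stream per minute; the built-in rate source stands in for a real feed such as Event Hubs or Kafka, and all names are invented for the example.

```python
# Minimal Spark Structured Streaming sketch (illustrative only).
# The built-in "rate" source generates demo rows; a real platform would
# read from Kafka/Event Hubs instead.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming_sketch").getOrCreate()

events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Aggregate event counts per one-minute window - a typical first step
# toward a real-time dashboard fed into Power BI or similar.
counts = (
    events.withWatermark("timestamp", "2 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .agg(F.count("*").alias("events"))
)

query = (counts.writeStream
               .outputMode("update")
               .format("console")   # stand-in sink for demonstration
               .start())

query.awaitTermination()
```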

Posted 2 months ago

Apply